Jan 29 09:06:18 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 09:06:18 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 
09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 09:06:19 crc 
restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 09:06:19 crc restorecon[4687]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 09:06:19 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 09:06:20 crc kubenswrapper[4771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 09:06:20 crc kubenswrapper[4771]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 09:06:20 crc kubenswrapper[4771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 09:06:20 crc kubenswrapper[4771]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
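The long run of "not reset as customized by admin" messages above is expected behavior rather than an error: container_file_t is a customizable SELinux type, so a plain restorecon pass reports such paths but leaves their contexts alone, and only files whose labels actually diverged from policy (here /var/lib/kubelet/config.json and the kubenswrapper binary) get relabeled. A minimal sketch of how to inspect and, if really needed, force-reset one of these labels, assuming root on the node and reusing a path from the log:

    # Compare the label currently on the file with what policy expects.
    ls -Z /var/lib/kubelet/config.json
    matchpathcon /var/lib/kubelet/config.json

    # A default restorecon honors customizable types such as container_file_t
    # and leaves them in place; -F forces them back to the policy default.
    restorecon -v  /var/lib/kubelet/config.json
    restorecon -vF /var/lib/kubelet/config.json

Forcing labels back with -F on kubelet-managed paths would strip the per-pod MCS category pairs (the :c7,c13 style suffixes seen throughout the log) that isolate containers from one another, which is exactly why the default relabel skips them.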
Jan 29 09:06:20 crc kubenswrapper[4771]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 09:06:20 crc kubenswrapper[4771]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.428059 4771 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434440 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434466 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434471 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434480 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434487 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434493 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434498 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434502 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434508 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434512 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434517 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434521 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434525 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434529 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434533 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434537 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434540 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434544 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434548 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434551 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434555 4771 feature_gate.go:330] unrecognized feature 
gate: MetricsCollectionProfiles Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434558 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434562 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434567 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434571 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434575 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434579 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434582 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434586 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434590 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434594 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434600 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434605 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434610 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434614 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434619 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434623 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434628 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434632 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434636 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434640 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434644 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434649 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434652 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434656 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434660 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 09:06:20 
crc kubenswrapper[4771]: W0129 09:06:20.434663 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434667 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434672 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434677 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434681 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434684 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434688 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434718 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434722 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434726 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434731 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434736 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434740 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434744 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434751 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
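
The long run of "unrecognized feature gate" warnings above, and the near-identical runs that follow, are expected noise on OpenShift: the gate list handed to the kubelet includes OpenShift-level gates (GatewayAPI, AdminNetworkPolicy, and so on) that the embedded upstream feature_gate registry does not know, and the list is parsed several times during startup, so each unknown name is warned about once per pass. A small sketch that collapses the noise into a per-gate count, assuming the journal has been saved to a local file (the path is illustrative, not from this log):

    # Sketch: summarize the distinct "unrecognized feature gate" names in a
    # saved journal excerpt, e.g. journalctl -u kubelet > kubelet.log.
    # The input path is an assumption for illustration.
    import re
    from collections import Counter

    pattern = re.compile(r"unrecognized feature gate: (\S+)")
    with open("kubelet.log") as fh:
        counts = Counter(pattern.findall(fh.read()))

    for gate, hits in counts.most_common():
        print(f"{gate}: warned {hits} time(s)")
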
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434756 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434760 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434765 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434769 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434773 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434778 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434784 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434788 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434792 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.434796 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443347 4771 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443404 4771 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443418 4771 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443428 4771 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443437 4771 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443443 4771 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443451 4771 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443458 4771 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443463 4771 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443468 4771 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443474 4771 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443479 4771 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443484 4771 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443489 4771 flags.go:64] FLAG: --cgroup-root="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443495 4771 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443501 4771 flags.go:64] FLAG: --client-ca-file="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443507 4771 flags.go:64] FLAG: --cloud-config="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443516 4771 flags.go:64] FLAG: --cloud-provider="" Jan 29 09:06:20 crc kubenswrapper[4771]: 
I0129 09:06:20.443521 4771 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443532 4771 flags.go:64] FLAG: --cluster-domain="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443536 4771 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443541 4771 flags.go:64] FLAG: --config-dir="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443546 4771 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443551 4771 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443559 4771 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443563 4771 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443569 4771 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443574 4771 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443579 4771 flags.go:64] FLAG: --contention-profiling="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443583 4771 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443588 4771 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443593 4771 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443599 4771 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443607 4771 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443612 4771 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443617 4771 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443623 4771 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443628 4771 flags.go:64] FLAG: --enable-server="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443632 4771 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443639 4771 flags.go:64] FLAG: --event-burst="100" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443644 4771 flags.go:64] FLAG: --event-qps="50" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443649 4771 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443654 4771 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443658 4771 flags.go:64] FLAG: --eviction-hard="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443664 4771 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443669 4771 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443674 4771 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443679 4771 flags.go:64] FLAG: --eviction-soft="" Jan 29 09:06:20 crc 
kubenswrapper[4771]: I0129 09:06:20.443683 4771 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443688 4771 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443720 4771 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443724 4771 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443729 4771 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443734 4771 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443761 4771 flags.go:64] FLAG: --feature-gates="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443769 4771 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443779 4771 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443788 4771 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443794 4771 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443800 4771 flags.go:64] FLAG: --healthz-port="10248" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443806 4771 flags.go:64] FLAG: --help="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443812 4771 flags.go:64] FLAG: --hostname-override="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443817 4771 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443822 4771 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443828 4771 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443833 4771 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443839 4771 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443844 4771 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443851 4771 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443856 4771 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443862 4771 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443868 4771 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443873 4771 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443879 4771 flags.go:64] FLAG: --kube-reserved="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443885 4771 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443890 4771 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443895 4771 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443900 4771 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 09:06:20 crc 
kubenswrapper[4771]: I0129 09:06:20.443906 4771 flags.go:64] FLAG: --lock-file="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443912 4771 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443919 4771 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443926 4771 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443945 4771 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443954 4771 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443960 4771 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443966 4771 flags.go:64] FLAG: --logging-format="text" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443972 4771 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443979 4771 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443985 4771 flags.go:64] FLAG: --manifest-url="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.443990 4771 flags.go:64] FLAG: --manifest-url-header="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444031 4771 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444036 4771 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444043 4771 flags.go:64] FLAG: --max-pods="110" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444047 4771 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444052 4771 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444056 4771 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444061 4771 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444066 4771 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444070 4771 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444075 4771 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444092 4771 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444096 4771 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444101 4771 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444105 4771 flags.go:64] FLAG: --pod-cidr="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444111 4771 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444119 4771 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444124 4771 flags.go:64] FLAG: 
--pod-max-pids="-1" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444128 4771 flags.go:64] FLAG: --pods-per-core="0" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444132 4771 flags.go:64] FLAG: --port="10250" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444136 4771 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444140 4771 flags.go:64] FLAG: --provider-id="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444145 4771 flags.go:64] FLAG: --qos-reserved="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444149 4771 flags.go:64] FLAG: --read-only-port="10255" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444153 4771 flags.go:64] FLAG: --register-node="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444160 4771 flags.go:64] FLAG: --register-schedulable="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444166 4771 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444177 4771 flags.go:64] FLAG: --registry-burst="10" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444182 4771 flags.go:64] FLAG: --registry-qps="5" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444188 4771 flags.go:64] FLAG: --reserved-cpus="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444193 4771 flags.go:64] FLAG: --reserved-memory="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444201 4771 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444207 4771 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444212 4771 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444218 4771 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444223 4771 flags.go:64] FLAG: --runonce="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444229 4771 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444234 4771 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444240 4771 flags.go:64] FLAG: --seccomp-default="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444246 4771 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444251 4771 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444257 4771 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444264 4771 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444270 4771 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444275 4771 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444279 4771 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444284 4771 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444288 4771 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 09:06:20 crc 
kubenswrapper[4771]: I0129 09:06:20.444293 4771 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444297 4771 flags.go:64] FLAG: --system-cgroups="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444301 4771 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444311 4771 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444315 4771 flags.go:64] FLAG: --tls-cert-file="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444320 4771 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444327 4771 flags.go:64] FLAG: --tls-min-version="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444332 4771 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444337 4771 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444341 4771 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444346 4771 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444352 4771 flags.go:64] FLAG: --v="2" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444359 4771 flags.go:64] FLAG: --version="false" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444366 4771 flags.go:64] FLAG: --vmodule="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444373 4771 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444378 4771 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444527 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444533 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444539 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444543 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444547 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444553 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444558 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444564 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
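
The flags.go:64 records above dump every flag with its effective value, one record per flag, which makes this the quickest place to audit a node's kubelet invocation (for example --config="/etc/kubernetes/kubelet.conf" and --node-ip="192.168.126.11"). A sketch that parses the dump into a dictionary; as before, the input path is illustrative:

    # Sketch: parse the flags.go:64] FLAG: --name="value" records into a dict
    # for auditing. Values in this log contain no embedded quotes, so a
    # non-greedy quoted match is sufficient.
    import re

    flag_re = re.compile(r'FLAG: --([A-Za-z0-9-]+)="(.*?)"')
    flags = {}
    with open("kubelet.log") as fh:
        for name, value in flag_re.findall(fh.read()):
            flags[name] = value

    print(flags.get("config"))   # expected: /etc/kubernetes/kubelet.conf
    print(flags.get("node-ip"))  # expected: 192.168.126.11
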
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444569 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444574 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444577 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444581 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444584 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444588 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444592 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444595 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444599 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444602 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444606 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444609 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444613 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444616 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444621 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444626 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444632 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444636 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444641 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444645 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444648 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444653 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444656 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444660 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444664 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444667 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444670 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444674 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444678 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444682 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444685 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444706 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444712 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444715 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444719 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444723 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444727 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444733 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444742 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444747 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444752 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444756 4771 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444761 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444765 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444770 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444774 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444778 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444782 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444786 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444790 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444795 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444802 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444807 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444817 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444822 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444828 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444833 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444838 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444844 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444850 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444858 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444862 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.444868 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.444886 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.454893 4771 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.455328 4771 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.455976 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456025 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456031 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456036 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456039 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456044 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456048 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456053 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456058 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456063 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456068 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456072 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456077 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456081 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456085 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456089 4771 feature_gate.go:330] unrecognized feature gate: 
MetricsCollectionProfiles Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456095 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456102 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456106 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456110 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456114 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456119 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456124 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456128 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456132 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456136 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456140 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456144 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456147 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456153 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456157 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456161 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456165 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456169 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456172 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456176 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456180 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456183 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456189 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456194 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456197 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 09:06:20 
crc kubenswrapper[4771]: W0129 09:06:20.456201 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456207 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456210 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456214 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456218 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456222 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456226 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456229 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456233 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456236 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456240 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456243 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456247 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456250 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456254 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456257 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456262 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456267 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456271 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456276 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
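
After each pass over the gate list, the I-level feature_gate.go:386 record prints the effective result: the unknown names have been dropped, only the gates the kubelet actually understands remain, and KMSv1 and ValidatingAdmissionPolicy are forced on as the W-level records announce. The same map prints identically on every pass in this log. A sketch that turns that Go-style map dump into a Python dict (the summary string is abridged here):

    # Sketch: parse the Go map dump from a feature_gate.go:386 summary record
    # into a dict of gate name -> bool. The string below is abridged from the
    # full map printed in this log.
    import re

    summary = ("feature gates: {map[CloudDualStackNodeIPs:true "
               "DisableKubeletCloudCredentialProviders:true KMSv1:true "
               "NodeSwap:false ValidatingAdmissionPolicy:true]}")

    gates = {name: value == "true"
             for name, value in re.findall(r"(\w+):(true|false)", summary)}
    print(gates)  # e.g. {'CloudDualStackNodeIPs': True, ..., 'NodeSwap': False}
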
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456280 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456285 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456290 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456295 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456299 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456303 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456307 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456311 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456314 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456318 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.456326 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456463 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456469 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456474 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456478 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456482 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456486 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456490 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456494 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456499 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456503 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456508 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456511 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 09:06:20 crc 
kubenswrapper[4771]: W0129 09:06:20.456515 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456519 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456523 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456527 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456530 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456534 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456539 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456542 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456546 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456550 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456553 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456557 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456561 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456565 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456570 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456573 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456577 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456581 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456584 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456588 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456592 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456597 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456602 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456607 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456611 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456615 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456618 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456623 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456627 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456630 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456635 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456639 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456644 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456648 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456651 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456655 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456659 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456664 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456669 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456673 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456677 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456680 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456684 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456689 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456711 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456717 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456721 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456725 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456729 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456732 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456736 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456740 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456743 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456747 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456750 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456754 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456758 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456762 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.456765 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.456772 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.456996 4771 server.go:940] "Client rotation is on, will 
bootstrap in background" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.462627 4771 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.462742 4771 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.463986 4771 server.go:997] "Starting client certificate rotation" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.464015 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.465065 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-02 18:05:36.468173248 +0000 UTC Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.465163 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.498002 4771 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.500118 4771 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.502286 4771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.527524 4771 log.go:25] "Validated CRI v1 runtime API" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.567650 4771 log.go:25] "Validated CRI v1 image API" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.569620 4771 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.575009 4771 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-09-01-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.575050 4771 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.657917 4771 manager.go:217] Machine: {Timestamp:2026-01-29 09:06:20.590937181 +0000 UTC m=+0.713777418 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.657917 4771 manager.go:217] Machine: {Timestamp:2026-01-29 09:06:20.590937181 +0000 UTC m=+0.713777418 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b5e4e256-be21-43c8-be21-43d17dd34516 BootID:5127b061-1bf1-4563-9f7f-0d3b9538d51f Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4f:10:ad Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4f:10:ad Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c2:0b:1e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:26:0f:4f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:da:69:0c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d4:71:d8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:5e:47:4e:1c:27 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4a:e8:cc:d7:09:8e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.658207 4771 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.658512 4771 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.658848 4771 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.659017 4771 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
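
Whatever the swap_util.go:113 message's intent, the quoted /proc/swaps contents are only the header row, i.e. no swap device is active on this machine, which agrees with SwapCapacity:0 in the Machine line above (whose MemoryCapacity of 33654128640 bytes is roughly 31.3 GiB, presented as 12 single-core sockets, a typical virtualized topology). A short Go sketch of the same /proc/swaps check, illustrative only and not the kubelet's code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        // /proc/swaps always starts with the header row quoted in the log;
        // each additional line is one active swap device.
        f, err := os.Open("/proc/swaps")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        sc.Scan() // header: Filename Type Size Used Priority
        devices := 0
        for sc.Scan() {
            if len(sc.Text()) > 0 {
                devices++
            }
        }
        fmt.Printf("active swap devices: %d\n", devices) // 0 on this machine
    }
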
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.659262 4771 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.659273 4771 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.660328 4771 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.660362 4771 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.662073 4771 state_mem.go:36] "Initialized new in-memory state store" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.662187 4771 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.667943 4771 kubelet.go:418] "Attempting to sync node with API server" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.667965 4771 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.667989 4771 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.668005 4771 kubelet.go:324] "Adding apiserver pod source" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.668018 4771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.678365 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.678467 
Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.678467 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError"
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.679633 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused
Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.679772 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.680866 4771 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.681820 4771 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.683036 4771 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684459 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684482 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684489 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684495 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684505 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684512 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684519 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684546 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684555 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684564 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684593 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.684620 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.685578 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
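
Every failure in this stretch, from the CSR post to the Service and Node reflectors, is the same dial tcp 38.129.56.98:6443: connect: connection refused: on CRC the kube-apiserver runs as a static pod that this very kubelet has not started yet, so the clients simply retry until the port opens (the lease controller below even logs its retry interval, 200ms). A self-contained Go sketch of that wait-with-backoff pattern, illustrative only and not client-go's reflector code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForAPIServer dials until the endpoint accepts a TCP connection,
    // doubling the delay after every refusal (capped at 5s), roughly the
    // shape of the retries behind the errors above.
    func waitForAPIServer(addr string) {
        delay := 200 * time.Millisecond
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("reachable:", addr)
                return
            }
            fmt.Printf("still down (%v), retrying in %v\n", err, delay)
            time.Sleep(delay)
            if delay *= 2; delay > 5*time.Second {
                delay = 5 * time.Second
            }
        }
    }

    func main() {
        waitForAPIServer("api-int.crc.testing:6443")
    }
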
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.686331 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.686509 4771 server.go:1280] "Started kubelet"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.687138 4771 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.687137 4771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.688383 4771 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 09:06:20 crc systemd[1]: Started Kubernetes Kubelet.
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.689668 4771 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.690392 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.690441 4771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.690876 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.690884 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:36:06.358097356 +0000 UTC
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692042 4771 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692071 4771 factory.go:55] Registering systemd factory
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692095 4771 factory.go:221] Registration of the systemd container factory successfully
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692095 4771 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692078 4771 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692563 4771 factory.go:153] Registering CRI-O factory
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692597 4771 factory.go:221] Registration of the crio container factory successfully
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692729 4771 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692772 4771 factory.go:103] Registering Raw factory
Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.692794 4771 manager.go:1196] Started watching for new ooms in manager
Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.698124 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="200ms"
Jan 29 09:06:20 crc kubenswrapper[4771]: W0129
09:06:20.698077 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.698391 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.700407 4771 manager.go:319] Starting recovery of all containers Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.708091 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.98:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f28622bca4f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 09:06:20.686479145 +0000 UTC m=+0.809319372,LastTimestamp:2026-01-29 09:06:20.686479145 +0000 UTC m=+0.809319372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.711502 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.711629 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.711735 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.711819 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.711930 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712010 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" 
seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712103 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712185 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712267 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712346 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712424 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712508 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712624 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712732 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712817 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.712903 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713007 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 09:06:20 crc 
kubenswrapper[4771]: I0129 09:06:20.713091 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713171 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713280 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713367 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713450 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713536 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713622 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713798 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713886 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.713970 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714058 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714136 
4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714220 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714304 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714385 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714469 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714549 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714636 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714734 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714824 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714909 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.714999 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715093 4771 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715178 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715259 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715336 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715422 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715501 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715579 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715664 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715770 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715877 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.715964 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716059 4771 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716150 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716238 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716326 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716412 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716496 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716577 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716663 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716768 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716861 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.716939 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717017 4771 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717099 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717182 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717269 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717351 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717432 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717509 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717585 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717669 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717767 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717849 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.717929 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718011 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718105 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718196 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718281 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718363 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718449 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718537 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718621 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718714 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718809 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.718926 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719013 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719105 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719186 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719264 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719342 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719429 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719509 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719593 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719749 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719838 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.719941 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720032 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720208 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720295 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720378 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720458 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720534 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720624 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720721 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720811 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720897 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.720982 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721065 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721164 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721277 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721359 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721452 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721539 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721628 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721735 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721835 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.721915 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722000 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722087 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722165 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722248 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722336 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722412 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722483 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722549 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722621 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722711 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722796 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722881 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.722972 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.723049 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725034 4771 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725110 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725130 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725150 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725163 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725174 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725188 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725202 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725220 4771 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725233 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725244 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725255 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725265 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725278 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725290 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725304 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725315 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725325 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725337 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725360 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725375 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725388 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725401 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725413 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725423 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725435 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725461 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725472 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725483 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725496 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725506 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725517 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725528 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725539 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725548 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725559 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725571 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725582 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725592 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725603 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725616 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725626 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725639 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725650 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725662 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725675 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725686 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725684 4771 manager.go:324] Recovery completed Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.725709 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727093 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727166 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727183 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727202 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727217 4771 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727230 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727245 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727260 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727275 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727288 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727303 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727318 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727333 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727346 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727359 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727371 4771 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727386 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727399 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727412 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727425 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727438 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727449 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727461 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727473 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727486 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727499 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727510 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727520 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727533 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727545 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727556 4771 reconstruct.go:97] "Volume reconstruction finished" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.727565 4771 reconciler.go:26] "Reconciler: start to sync state" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.735998 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.738383 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.738433 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.738446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.740610 4771 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.740630 4771 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.740658 4771 state_mem.go:36] "Initialized new in-memory state store" Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.791077 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.829984 4771 policy_none.go:49] "None policy: Start" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.833811 4771 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.833873 4771 state_mem.go:35] "Initializing new in-memory state store" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.833983 4771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.836595 4771 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.836644 4771 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 09:06:20 crc kubenswrapper[4771]: I0129 09:06:20.836683 4771 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.836754 4771 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 09:06:20 crc kubenswrapper[4771]: W0129 09:06:20.837464 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.837578 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.891367 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.899857 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="400ms" Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.937651 4771 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 09:06:20 crc kubenswrapper[4771]: E0129 09:06:20.991518 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.033210 4771 manager.go:334] "Starting Device Plugin manager" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.033464 4771 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.033487 4771 server.go:79] "Starting device plugin registration server" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.034463 4771 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.034485 4771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.034753 4771 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.034947 4771 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.034966 4771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.042155 4771 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.135677 4771 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.137152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.137229 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.137241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.137280 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.137962 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.138101 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.138100 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.98:6443: connect: connection refused" node="crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.152473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.152527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.152537 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.152753 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.152944 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.152977 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.154016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.154058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.154090 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.154886 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.154919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.154931 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.155113 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.155241 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.155282 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156171 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156299 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156337 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156656 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156682 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156707 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.156945 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157007 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157086 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157242 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157320 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.157348 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158081 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158092 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158491 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158551 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158796 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.158847 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.159542 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.159573 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.159613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241182 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241267 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241362 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241417 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241460 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241602 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241775 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241816 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241868 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241901 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241927 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.241949 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.242011 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.242055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.301366 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="800ms" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.338686 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.340231 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.340286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.340297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.340335 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.340945 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.98:6443: connect: connection refused" node="crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343284 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343366 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343395 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343418 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343441 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343441 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343465 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343523 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343531 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343672 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343531 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343789 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343768 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343541 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343863 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343915 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343948 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.343955 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.344002 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.344013 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 
09:06:21.344068 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.344136 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.344229 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.487369 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.509182 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.539427 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.546250 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.550163 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.687331 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.691119 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.691265 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.691154 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:55:46.258687449 +0000 UTC Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.710209 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f6e9f3a2fbf7e71cd2c436aceb5ab1713560fd2c75b6ffbfa23fb7ee13fc7a84 WatchSource:0}: Error finding container f6e9f3a2fbf7e71cd2c436aceb5ab1713560fd2c75b6ffbfa23fb7ee13fc7a84: Status 404 returned error can't find the container with id f6e9f3a2fbf7e71cd2c436aceb5ab1713560fd2c75b6ffbfa23fb7ee13fc7a84 Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.715012 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2c5ae2e5e854de9205e65a009099b112c5b9e95bb8ca03972bea24162ac67835 WatchSource:0}: Error finding container 2c5ae2e5e854de9205e65a009099b112c5b9e95bb8ca03972bea24162ac67835: Status 404 returned error can't find the container with id 2c5ae2e5e854de9205e65a009099b112c5b9e95bb8ca03972bea24162ac67835 Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.715262 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.715369 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.718025 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f969818648defb66a9deca12387c7779f0cbac4507da5c7ba168eda353bf9e36 WatchSource:0}: Error finding container 
f969818648defb66a9deca12387c7779f0cbac4507da5c7ba168eda353bf9e36: Status 404 returned error can't find the container with id f969818648defb66a9deca12387c7779f0cbac4507da5c7ba168eda353bf9e36 Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.718536 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-646ad6ec4b1a1f165793dad9ba362a4b2bab2e5909da605cc5cff61ddcf87646 WatchSource:0}: Error finding container 646ad6ec4b1a1f165793dad9ba362a4b2bab2e5909da605cc5cff61ddcf87646: Status 404 returned error can't find the container with id 646ad6ec4b1a1f165793dad9ba362a4b2bab2e5909da605cc5cff61ddcf87646 Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.723974 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3c1664f27c1d921907b41d6fdb64114870aa3176812628815ada405c99182cdf WatchSource:0}: Error finding container 3c1664f27c1d921907b41d6fdb64114870aa3176812628815ada405c99182cdf: Status 404 returned error can't find the container with id 3c1664f27c1d921907b41d6fdb64114870aa3176812628815ada405c99182cdf Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.741483 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.743439 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.743495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.743507 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.743543 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.744279 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.98:6443: connect: connection refused" node="crc" Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.752113 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.752219 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.841886 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f969818648defb66a9deca12387c7779f0cbac4507da5c7ba168eda353bf9e36"} Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.842863 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c5ae2e5e854de9205e65a009099b112c5b9e95bb8ca03972bea24162ac67835"} Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.844166 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f6e9f3a2fbf7e71cd2c436aceb5ab1713560fd2c75b6ffbfa23fb7ee13fc7a84"} Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.845736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c1664f27c1d921907b41d6fdb64114870aa3176812628815ada405c99182cdf"} Jan 29 09:06:21 crc kubenswrapper[4771]: I0129 09:06:21.846656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"646ad6ec4b1a1f165793dad9ba362a4b2bab2e5909da605cc5cff61ddcf87646"} Jan 29 09:06:21 crc kubenswrapper[4771]: W0129 09:06:21.915553 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:21 crc kubenswrapper[4771]: E0129 09:06:21.915660 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:22 crc kubenswrapper[4771]: E0129 09:06:22.103400 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="1.6s" Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.544945 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.546364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.546409 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.546421 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.546446 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 09:06:22 crc kubenswrapper[4771]: E0129 09:06:22.547161 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.98:6443: connect: connection refused" node="crc" Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.637788 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 09:06:22 crc kubenswrapper[4771]: E0129 09:06:22.639323 4771 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.687479 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:22 crc kubenswrapper[4771]: I0129 09:06:22.692480 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:52:42.767289164 +0000 UTC Jan 29 09:06:23 crc kubenswrapper[4771]: I0129 09:06:23.687903 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:23 crc kubenswrapper[4771]: I0129 09:06:23.693126 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:01:23.649160078 +0000 UTC Jan 29 09:06:23 crc kubenswrapper[4771]: E0129 09:06:23.705259 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="3.2s" Jan 29 09:06:23 crc kubenswrapper[4771]: W0129 09:06:23.989322 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:23 crc kubenswrapper[4771]: E0129 09:06:23.989393 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:24 crc kubenswrapper[4771]: W0129 09:06:24.092807 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:24 crc kubenswrapper[4771]: E0129 09:06:24.092894 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:24 crc kubenswrapper[4771]: W0129 09:06:24.101069 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection 
refused Jan 29 09:06:24 crc kubenswrapper[4771]: E0129 09:06:24.101127 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.148334 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.150266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.150304 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.150320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.150349 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 09:06:24 crc kubenswrapper[4771]: E0129 09:06:24.151075 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.98:6443: connect: connection refused" node="crc" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.688226 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.693435 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:26:08.125020513 +0000 UTC Jan 29 09:06:24 crc kubenswrapper[4771]: W0129 09:06:24.847600 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:24 crc kubenswrapper[4771]: E0129 09:06:24.847726 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.857416 4771 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5" exitCode=0 Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.857578 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.857645 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.859355 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.859411 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.859425 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.860524 4771 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81" exitCode=0 Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.860613 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.860733 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.862345 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.862425 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.862442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.863668 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a" exitCode=0 Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.863752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.863807 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.864707 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.864728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.864737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.866589 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.867423 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.867449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.867462 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.868596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.868668 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.868682 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.868711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.868780 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.870256 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.870281 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.870291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.872020 4771 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e" exitCode=0 Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.872063 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e"} Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.872186 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.873125 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.873155 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:24 crc kubenswrapper[4771]: I0129 09:06:24.873164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.687923 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.693962 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:50:48.49627036 +0000 UTC Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.816678 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.878532 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.878609 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.878623 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.878653 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.882921 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.882983 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.883003 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.883162 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.884380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.884428 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.884436 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:25 crc 
kubenswrapper[4771]: I0129 09:06:25.888551 4771 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d" exitCode=0 Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.888643 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.888946 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.890327 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.890403 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.890420 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.897956 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347"} Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.897988 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.898164 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.899842 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.899891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.899910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.900948 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.901008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:25 crc kubenswrapper[4771]: I0129 09:06:25.901025 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.687339 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.694671 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:53:18.977971825 +0000 UTC Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.903498 4771 generic.go:334] "Generic (PLEG): 
container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91" exitCode=0 Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.903605 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91"} Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.903689 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.904951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.904987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.904996 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.908575 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2"} Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.908688 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.908731 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.908813 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.908971 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.909032 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.910272 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.910318 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.910336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.918611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.918673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.918685 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.918738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 
09:06:26.918790 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.918817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.918643 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.919046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.919113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:26 crc kubenswrapper[4771]: I0129 09:06:26.944464 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.169201 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.351879 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.353800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.353848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.353866 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.353899 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.694784 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:32:24.215382317 +0000 UTC Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c"} Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916404 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3"} Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916423 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f"} Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916436 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687"} Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916442 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916504 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916449 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544"} Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916651 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.916554 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.917662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.917711 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.917721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.917780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.917812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.917824 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.918968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.919004 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:27 crc kubenswrapper[4771]: I0129 09:06:27.919017 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.695357 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:27:38.001081231 +0000 UTC Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.698761 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.698958 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.700207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.700244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.700256 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:28 crc 
kubenswrapper[4771]: I0129 09:06:28.705790 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.816931 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.817079 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.918527 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.918553 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.918885 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.919640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.919749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.919762 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.919768 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.919791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.919803 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.920555 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.920617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:28 crc kubenswrapper[4771]: I0129 09:06:28.920635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:29 crc kubenswrapper[4771]: I0129 09:06:29.009081 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:29 crc kubenswrapper[4771]: I0129 09:06:29.696442 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:36:12.431672786 +0000 UTC Jan 29 09:06:29 crc kubenswrapper[4771]: I0129 09:06:29.923286 4771 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 09:06:29 crc kubenswrapper[4771]: I0129 09:06:29.923418 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 09:06:29 crc kubenswrapper[4771]: I0129 09:06:29.924962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:29 crc kubenswrapper[4771]: I0129 09:06:29.925058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:29 crc kubenswrapper[4771]: I0129 09:06:29.925102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:30 crc kubenswrapper[4771]: I0129 09:06:30.696861 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:00:38.965171128 +0000 UTC
Jan 29 09:06:31 crc kubenswrapper[4771]: E0129 09:06:31.042260 4771 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.221389 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.221655 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.222977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.223016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.223026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.435242 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.435482 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.437215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.437311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.437332 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.697560 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:25:01.983696089 +0000 UTC
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.822256 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.822603 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.824237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.824324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:31 crc kubenswrapper[4771]: I0129 09:06:31.824340 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:32 crc kubenswrapper[4771]: I0129 09:06:32.698337 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:05:20.637764635 +0000 UTC
Jan 29 09:06:33 crc kubenswrapper[4771]: I0129 09:06:33.699577 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:46:26.034382362 +0000 UTC
Jan 29 09:06:34 crc kubenswrapper[4771]: I0129 09:06:34.700051 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:52:45.133962319 +0000 UTC
Jan 29 09:06:35 crc kubenswrapper[4771]: I0129 09:06:35.701100 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:52:05.649057329 +0000 UTC
Jan 29 09:06:36 crc kubenswrapper[4771]: I0129 09:06:36.701455 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:15:34.730197962 +0000 UTC
Jan 29 09:06:36 crc kubenswrapper[4771]: I0129 09:06:36.881513 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 29 09:06:36 crc kubenswrapper[4771]: I0129 09:06:36.881798 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 09:06:36 crc kubenswrapper[4771]: I0129 09:06:36.883275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:36 crc kubenswrapper[4771]: I0129 09:06:36.883338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:36 crc kubenswrapper[4771]: I0129 09:06:36.883350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:36 crc kubenswrapper[4771]: E0129 09:06:36.906352 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Jan 29 09:06:36 crc kubenswrapper[4771]: E0129 09:06:36.947013 4771 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 29 09:06:37 crc kubenswrapper[4771]: I0129 09:06:37.169861 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 09:06:37 crc kubenswrapper[4771]: I0129 09:06:37.169989 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:06:37 crc kubenswrapper[4771]: E0129 09:06:37.355242 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 29 09:06:37 crc kubenswrapper[4771]: E0129 09:06:37.621423 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188f28622bca4f29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 09:06:20.686479145 +0000 UTC m=+0.809319372,LastTimestamp:2026-01-29 09:06:20.686479145 +0000 UTC m=+0.809319372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 09:06:37 crc kubenswrapper[4771]: I0129 09:06:37.683356 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 09:06:37 crc kubenswrapper[4771]: I0129 09:06:37.683439 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 09:06:37 crc kubenswrapper[4771]: I0129 09:06:37.702557 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:21:34.118280473 +0000 UTC
Jan 29 09:06:38 crc kubenswrapper[4771]: I0129 09:06:38.703481 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:29:26.098261994 +0000 UTC
Jan 29 09:06:38 crc kubenswrapper[4771]: I0129 09:06:38.818128 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 09:06:38 crc kubenswrapper[4771]: I0129 09:06:38.818224 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:06:39 crc kubenswrapper[4771]: I0129 09:06:39.704565 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:02:36.392636842 +0000 UTC
Jan 29 09:06:40 crc kubenswrapper[4771]: I0129 09:06:40.704875 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:05:20.226181751 +0000 UTC
Jan 29 09:06:41 crc kubenswrapper[4771]: E0129 09:06:41.042401 4771 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 09:06:41 crc kubenswrapper[4771]: I0129 09:06:41.228299 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 09:06:41 crc kubenswrapper[4771]: I0129 09:06:41.228469 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 09:06:41 crc kubenswrapper[4771]: I0129 09:06:41.229649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:41 crc kubenswrapper[4771]: I0129 09:06:41.229684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:41 crc kubenswrapper[4771]: I0129 09:06:41.229708 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:41 crc kubenswrapper[4771]: I0129 09:06:41.705621 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:13:02.549490454 +0000 UTC
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.175931 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.176219 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.177622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.177665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.177679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.185412 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.693810 4771 trace.go:236] Trace[319218500]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 09:06:27.883) (total time: 14810ms):
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[319218500]: ---"Objects listed" error: 14810ms (09:06:42.693)
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[319218500]: [14.81019481s] [14.81019481s] END
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.693857 4771 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.698525 4771 trace.go:236] Trace[571767319]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 09:06:29.997) (total time: 12700ms):
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[571767319]: ---"Objects listed" error: 12700ms (09:06:42.698)
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[571767319]: [12.700924609s] [12.700924609s] END
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.698576 4771 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.698606 4771 trace.go:236] Trace[196097133]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 09:06:31.008) (total time: 11690ms):
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[196097133]: ---"Objects listed" error: 11689ms (09:06:42.698)
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[196097133]: [11.690015134s] [11.690015134s] END
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.698841 4771 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.698757 4771 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.700510 4771 trace.go:236] Trace[1256750192]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 09:06:27.863) (total time: 14836ms):
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[1256750192]: ---"Objects listed" error: 14836ms (09:06:42.700)
Jan 29 09:06:42 crc kubenswrapper[4771]: Trace[1256750192]: [14.836661768s] [14.836661768s] END
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.700532 4771 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.706105 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:10:04.205211176 +0000 UTC
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.726115 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55098->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.726140 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55112->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.726197 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55098->192.168.126.11:17697: read: connection reset by peer"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.726235 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55112->192.168.126.11:17697: read: connection reset by peer"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.962177 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.964219 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2" exitCode=255
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.964297 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2"}
Jan 29 09:06:42 crc kubenswrapper[4771]: I0129 09:06:42.989287 4771 scope.go:117] "RemoveContainer" containerID="a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.681171 4771 apiserver.go:52] "Watching apiserver"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.684829 4771 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.685329 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.686048 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.686168 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.686218 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.686193 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.686513 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.686628 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.686734 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.686760 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.686934 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.692819 4771 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.694797 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.699053 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.701325 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.704810 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.704907 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.704936 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.705377 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706486 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706802 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706827 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706857 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706882 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706882 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706903 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706925 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706945 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706967 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706989 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707008 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707028 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707047 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707067 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707090 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707110 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707128 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707145 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707163 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707179 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707197 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707213 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707230 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707248 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707264 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707280 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707298 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707330 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707349 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707368 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707383 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707401 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707422 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707441 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707457 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707476 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707500 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707525 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707548 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707569 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707588 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707681 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707722 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707854 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707949 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707968 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.707987 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708007 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708032 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708088 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708124 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708141 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708172 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708189 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708207 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708223 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708255 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708270 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708285 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708299 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708316 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708338 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708356 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708374 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708397 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708420 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708437 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708454 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708473 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708491 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708508 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708524 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708540 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708563 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708596 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708622 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708641 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708665 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708706 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708737 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708817 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708915 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708934 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708956 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.708992 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709018 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709040 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709119 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709145 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709165 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709202 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709219 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709250 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709304 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709327 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709345 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709366 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709405 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709431 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709467 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709519 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709548 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709571 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709616 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709636 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709661 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709688 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709675 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709724 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709780 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709824 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709911 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709943 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709972 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709997 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710028 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710055 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710079 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710105 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710137 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710161 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710164 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod
"bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710186 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710214 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710241 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710267 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710297 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710321 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710347 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710373 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710397 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710420 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710445 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710467 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710488 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710518 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710545 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710572 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710630 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710656 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710684 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710728 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710758 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710782 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710806 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710831 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710857 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710881 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710906 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710933 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 09:06:43 crc 
kubenswrapper[4771]: I0129 09:06:43.710959 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710989 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711014 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711038 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711063 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711114 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711137 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711163 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711212 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711236 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711259 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711282 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711465 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711490 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711513 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711537 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711559 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711582 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711609 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711637 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711662 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711690 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711744 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711769 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711792 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711817 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711843 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711869 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711893 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711922 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711950 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711980 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712007 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712031 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712055 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712082 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 
09:06:43.712189 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712221 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712260 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712288 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712310 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712332 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712361 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712383 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712403 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712422 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712446 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712467 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712489 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712569 4771 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712583 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712595 4771 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712607 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712618 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.717937 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.730958 4771 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.738203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.738993 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 
09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710204 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710513 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.705642 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.741799 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.742738 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.742877 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.742960 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.743018 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.705686 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.743462 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.705850 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.744083 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.705970 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706148 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706176 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:54:10.391885589 +0000 UTC Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.706413 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709685 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710564 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710853 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.710905 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711297 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711459 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711583 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711776 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711836 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711924 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.711940 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712195 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712501 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712563 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712847 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712886 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.712979 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.713167 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.713268 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.713350 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.713469 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.713565 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.713680 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.713798 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.714058 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.714178 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.714251 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.714279 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.714686 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.714893 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.705586 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.715556 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.715842 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716044 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716066 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716488 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716535 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716685 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716800 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716946 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.716993 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.717023 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.717415 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.717522 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.718715 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.718988 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.719165 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.719365 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.719497 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.721034 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.721125 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.722285 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.722352 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.722477 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.722634 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.722649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.722825 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.722994 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.723048 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.723144 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.723266 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.723573 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.723624 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.723926 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.724068 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.724107 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.724150 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.724276 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.724439 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.724669 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.724742 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.725070 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.725192 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.725434 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.725467 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.725774 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.725890 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:06:44.225866021 +0000 UTC m=+24.348706248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.746155 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.726070 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.726354 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.726680 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.746403 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.746721 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.748962 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.749244 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.749572 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.749970 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.727186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.727343 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.727784 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.728172 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.728511 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.728825 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.728896 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.729086 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.729287 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.729563 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.729669 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.729780 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.729805 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.729641 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.730072 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.750483 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.730386 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.730597 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.730606 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.731131 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.731127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.731380 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.731926 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.732011 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.732295 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.732326 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.732446 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.732478 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.732519 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.732552 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.733326 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.733515 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.733862 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.734073 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.734074 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.734842 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.734875 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.734673 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.735083 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.735326 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.735711 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.735779 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.736163 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.736257 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.736288 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.736514 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.736523 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.709671 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.736891 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.736969 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.737127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.737733 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.739424 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.739863 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740012 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740052 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740251 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740341 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740379 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740507 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740636 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740682 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740874 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.740964 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.741008 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.741006 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.751038 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.751211 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.751533 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:44.251427836 +0000 UTC m=+24.374268063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.751569 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.752007 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.752029 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.752155 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:44.252128586 +0000 UTC m=+24.374968813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.752197 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.752449 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.752593 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.752601 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.752849 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.753220 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.754175 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.755192 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.755369 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.758144 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.759250 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.759408 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.759448 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.759494 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.759513 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.759620 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:44.259591812 +0000 UTC m=+24.382432239 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.759746 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.759856 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.760910 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.761987 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.762527 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.762951 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.763267 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.763327 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.763560 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.763663 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.763782 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.763964 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.764186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.765101 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.766919 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.767159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.767193 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.767204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.767370 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.768816 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.769238 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.769267 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.769285 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.769347 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:44.269327501 +0000 UTC m=+24.392167728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.772539 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.774393 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.776882 4771 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.776926 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.777000 4771 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.779880 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.779917 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.779927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.779968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.779987 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:43Z","lastTransitionTime":"2026-01-29T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.785433 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.787875 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.794204 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.795320 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.796968 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.801435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.801482 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.801494 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.801511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.801522 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:43Z","lastTransitionTime":"2026-01-29T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:43 crc kubenswrapper[4771]: W0129 09:06:43.803177 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-effcefda02954c7835cce211fb1fcfd658ea49ada067a8207e7ae759c9af48ce WatchSource:0}: Error finding container effcefda02954c7835cce211fb1fcfd658ea49ada067a8207e7ae759c9af48ce: Status 404 returned error can't find the container with id effcefda02954c7835cce211fb1fcfd658ea49ada067a8207e7ae759c9af48ce Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.809618 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.811739 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813246 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 
09:06:43.813288 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813444 4771 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813468 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813484 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813498 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813510 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813522 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813534 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813544 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813555 4771 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813566 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813575 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813584 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813594 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813605 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813616 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813626 4771 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813637 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813645 4771 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813655 4771 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813663 4771 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813671 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813680 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813707 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813718 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813726 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813734 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813743 4771 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813752 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813761 4771 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813769 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813778 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813786 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813796 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813805 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813821 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813831 4771 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813839 4771 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813849 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813857 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813865 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813875 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813883 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813892 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813901 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813909 4771 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813917 4771 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813926 4771 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813935 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813943 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813952 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813960 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813968 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813978 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813986 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.813994 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814009 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814018 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814026 4771 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814038 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814050 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814062 4771 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814077 4771 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814090 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814104 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814117 4771 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814130 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814160 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814173 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814187 4771 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814200 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814212 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814225 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814240 4771 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814251 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814264 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814275 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814285 4771 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814295 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814306 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814318 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814330 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814342 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814353 4771 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814363 4771 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814373 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814382 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814392 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814401 4771 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814413 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814427 4771 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814439 4771 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814451 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814463 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814477 4771 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814489 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814501 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814514 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814528 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814508 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814177 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.814725 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815014 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815069 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815091 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815107 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815121 4771 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815136 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815151 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815166 4771 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815180 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815195 4771 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815209 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815221 4771 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815233 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815246 4771 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815258 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815271 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815283 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815298 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815311 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815360 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815371 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815381 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815390 4771 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815401 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815411 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815421 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815431 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815441 4771 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815451 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815462 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815474 4771 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815484 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815496 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815505 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815516 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815526 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815535 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815545 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815554 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815564 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815573 4771 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815583 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815594 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815608 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815626 4771 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815637 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815651 4771 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815664 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815676 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815687 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815716 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815728 4771 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815743 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815756 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815770 4771 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815801 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815815 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815836 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815784 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815850 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:43Z","lastTransitionTime":"2026-01-29T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815877 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815975 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.815989 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816003 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816018 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816031 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816045 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816058 4771 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816074 4771 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816086 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816102 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816114 4771 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816128 4771 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 
09:06:43.816141 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816154 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816167 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816180 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816195 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816209 4771 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816222 4771 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816236 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816250 4771 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816264 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816277 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816291 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816304 4771 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816318 4771 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816334 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816347 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816361 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816375 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816389 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816402 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.816416 4771 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.820109 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.828341 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.830708 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.832460 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.832527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.832540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.832567 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.832579 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:43Z","lastTransitionTime":"2026-01-29T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.842589 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.847202 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.847269 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.847284 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.847313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.847326 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:43Z","lastTransitionTime":"2026-01-29T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.858256 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:43 crc kubenswrapper[4771]: E0129 09:06:43.858408 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.860636 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.860713 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.860732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.860757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.860773 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:43Z","lastTransitionTime":"2026-01-29T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.964455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.964509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.964519 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.964537 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.964550 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:43Z","lastTransitionTime":"2026-01-29T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.975387 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"effcefda02954c7835cce211fb1fcfd658ea49ada067a8207e7ae759c9af48ce"} Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.982533 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.993550 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f"} Jan 29 09:06:43 crc kubenswrapper[4771]: I0129 09:06:43.994260 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.012081 4771 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.012085 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.030663 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.031100 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:44 crc kubenswrapper[4771]: W0129 09:06:44.048752 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-4715aee97d35f32b48ea4642a6dc15929adc2b14e4d619c4a9569892589bf929 WatchSource:0}: Error finding container 4715aee97d35f32b48ea4642a6dc15929adc2b14e4d619c4a9569892589bf929: Status 404 returned error can't find the container with id 4715aee97d35f32b48ea4642a6dc15929adc2b14e4d619c4a9569892589bf929 Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.052856 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.067478 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.069399 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.069438 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.069450 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.069523 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.069538 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
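Every one of these patch failures has the same shape: the API server cannot deliver the admission review to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, so the status update is rejected with "connection refused". A quick sketch of the reachability check implied by that error (host and port taken from the log; it must run on the node itself, since the address is loopback):

# probe_webhook.py - minimal sketch: check whether the webhook endpoint named
# in the log (127.0.0.1:9743) is accepting TCP connections at all.
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        # A failure here corresponds to the kubelet's
        # 'dial tcp 127.0.0.1:9743: connect: connection refused' error.
        print(f"probe failed: {exc}")
        return False

if __name__ == "__main__":
    print("webhook reachable:", tcp_probe("127.0.0.1", 9743))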
Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.081904 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.084245 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.106140 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:44 crc kubenswrapper[4771]: W0129 09:06:44.131976 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-52bdab64b98da211deed31e8f55ed9078f4ca47891987dfe59d77289095bb0c2 WatchSource:0}: Error finding container 52bdab64b98da211deed31e8f55ed9078f4ca47891987dfe59d77289095bb0c2: Status 404 returned error can't find the container with id 52bdab64b98da211deed31e8f55ed9078f4ca47891987dfe59d77289095bb0c2 Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.132082 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.172320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.172386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.172406 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.172427 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.172442 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.275166 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.275204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.275214 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.275232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.275493 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.320990 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.321095 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.321136 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.321166 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.321195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321353 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321375 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321390 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321461 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321518 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-01-29 09:06:45.321500368 +0000 UTC m=+25.444340595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321560 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321588 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321673 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:06:45.321645372 +0000 UTC m=+25.444485599 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321717 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321753 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:45.321741675 +0000 UTC m=+25.444581902 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321769 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321780 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 09:06:45.321771225 +0000 UTC m=+25.444611452 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.321844 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:45.321796916 +0000 UTC m=+25.444637143 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.378782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.378852 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.378864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.378888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.378899 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.482134 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.482197 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.482211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.482235 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.482251 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.584898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.584942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.584952 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.584969 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.584982 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.688018 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.688069 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.688079 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.688099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.688113 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.746289 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:33:00.323838168 +0000 UTC Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.791740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.791839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.791853 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.791899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.791916 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.837507 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:44 crc kubenswrapper[4771]: E0129 09:06:44.837729 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
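The certificate_manager line above deserves a second look: the serving certificate does not expire until 2026-02-24, but the rotation deadline (2025-12-10) is already in the past at log time, so the kubelet treats rotation as immediately due. The comparison, spelled out with the three timestamps from the log:

# cert_rotation.py - minimal sketch: compare the rotation deadline from the
# certificate_manager entry with the log's own timestamp.
from datetime import datetime, timezone

now = datetime(2026, 1, 29, 9, 6, 44, tzinfo=timezone.utc)       # log time
expiry = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)    # cert expiry
deadline = datetime(2025, 12, 10, 13, 33, 0, tzinfo=timezone.utc)  # rotation deadline
print("rotation overdue:", now > deadline, "; time to expiry:", expiry - now)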
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.841747 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.842449 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.844205 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.845040 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.846226 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.846776 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.847402 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.848400 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.849154 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.850186 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.850741 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.851906 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.852424 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.852994 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.853963 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.854507 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.855479 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.856001 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.856614 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.857759 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.858251 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.859404 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.860004 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.861087 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.861567 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.862240 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.863361 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.863891 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.865116 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.865680 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.866625 4771 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.866785 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.868504 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.869463 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.869904 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.871478 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.872179 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.873723 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.874357 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.875443 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.876016 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.877030 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.877686 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.878738 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.879322 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.880077 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.880595 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.881425 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.881996 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.882515 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.883082 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.883645 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.884256 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.884781 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.894747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.894793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.894811 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.894831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.894848 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.997099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.997151 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.997165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.997188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.997206 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:44Z","lastTransitionTime":"2026-01-29T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.997435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"52bdab64b98da211deed31e8f55ed9078f4ca47891987dfe59d77289095bb0c2"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.998986 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d"} Jan 29 09:06:44 crc kubenswrapper[4771]: I0129 09:06:44.999060 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4715aee97d35f32b48ea4642a6dc15929adc2b14e4d619c4a9569892589bf929"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.003265 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.003407 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.022458 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.038582 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.055309 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.072834 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.090048 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.095743 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.106064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.106122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.106136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.106156 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.106171 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.109039 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.118989 4771 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.127462 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.142502 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.157254 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.171165 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.187167 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.203289 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.209604 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.209651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.209662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.209679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.209711 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.219685 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.240316 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.312428 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.312493 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.312505 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.312526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.312541 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.333021 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.333156 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.333195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.333224 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333303 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333282 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:06:47.333235904 +0000 UTC m=+27.456076131 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333359 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333378 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333392 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.333444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333463 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333476 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333484 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333466 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:47.33344606 +0000 UTC m=+27.456286287 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333521 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:47.333511422 +0000 UTC m=+27.456351649 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333521 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333534 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:47.333527853 +0000 UTC m=+27.456368080 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.333650 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:47.333629125 +0000 UTC m=+27.456469522 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.415581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.415648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.415662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.415687 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.415725 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.518415 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.518475 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.518485 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.518504 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.518516 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.621667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.621749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.621764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.621793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.621809 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.724320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.724370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.724383 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.724405 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.724418 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.747529 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:59:40.145916657 +0000 UTC Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.821100 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.824276 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.827237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.827276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.827288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.827307 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.827319 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.833089 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.837311 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.837428 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.837668 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:45 crc kubenswrapper[4771]: E0129 09:06:45.837881 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.839983 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.854757 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.870123 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.884004 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.904818 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.922764 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.930138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.930180 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.930190 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.930207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.930218 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:45Z","lastTransitionTime":"2026-01-29T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.938567 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.957012 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.977074 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:45 crc kubenswrapper[4771]: I0129 09:06:45.993765 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.010147 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.026785 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.032853 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.032914 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.032928 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.032952 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.032967 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.046581 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.064898 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.082814 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.137767 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.137844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.137865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.137888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.137900 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.240728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.240799 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.240810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.240833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.240847 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.343792 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.343830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.343840 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.343856 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.343867 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.447455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.447512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.447537 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.447559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.447572 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.550231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.550280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.550293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.550316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.550330 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.652684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.652752 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.652764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.652786 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.652800 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.747860 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:57:04.971330543 +0000 UTC Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.755206 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.755252 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.755265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.755286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.755300 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.837317 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:46 crc kubenswrapper[4771]: E0129 09:06:46.837494 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.857626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.857681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.857723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.857739 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.857756 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.933186 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.945794 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.949643 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.950208 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.960495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.960827 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.960903 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.961001 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.961069 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:46Z","lastTransitionTime":"2026-01-29T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.964823 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.979100 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:46 crc kubenswrapper[4771]: I0129 09:06:46.993502 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.007319 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.021559 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.033468 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.049058 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.062888 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.063805 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.063832 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.063844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.063863 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.063878 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.078256 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.090803 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.105189 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.118565 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.152783 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.166394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.166444 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.166457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.166477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.166489 4771 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.185803 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.200479 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.221675 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.269334 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.269377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.269388 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.269407 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.269421 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.353421 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.353566 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353611 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:06:51.353582642 +0000 UTC m=+31.476422869 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.353648 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.353681 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.353726 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353655 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353809 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353816 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353830 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353833 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353844 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353846 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353886 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:51.35387829 +0000 UTC m=+31.476718507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353746 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.353902 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:51.353894261 +0000 UTC m=+31.476734488 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.354066 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:51.354051115 +0000 UTC m=+31.476891432 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.354084 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:51.354074406 +0000 UTC m=+31.476914633 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.372584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.372664 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.372685 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.372736 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.372753 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.475847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.475909 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.475917 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.475934 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.475949 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.579258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.579314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.579329 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.579351 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.579366 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.682441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.682483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.682492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.682508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.682520 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.748569 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:01:11.99019281 +0000 UTC Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.785456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.785497 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.785505 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.785521 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.785531 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.837003 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.837183 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.837377 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:47 crc kubenswrapper[4771]: E0129 09:06:47.837459 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.888229 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.888280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.888294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.888313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:47 crc kubenswrapper[4771]: I0129 09:06:47.888327 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:47Z","lastTransitionTime":"2026-01-29T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.003208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.003271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.003283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.003302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.003316 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.013967 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.031353 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.047047 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.061923 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.086419 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.103584 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.106176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.106227 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.106239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.106259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.106270 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.122886 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.136489 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.153277 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.176261 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.209148 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.209198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.209208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.209231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.209242 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.304935 4771 csr.go:261] certificate signing request csr-lgw94 is approved, waiting to be issued Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.312437 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.312495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.312505 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.312530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.312546 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.337454 4771 csr.go:257] certificate signing request csr-lgw94 is issued Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.415531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.415590 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.415600 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.415618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.415629 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.520174 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.520208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.520217 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.520235 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.520248 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.625665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.625726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.625739 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.625757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.625769 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.734431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.734495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.734510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.734533 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.734548 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.751926 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:36:19.259510954 +0000 UTC Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.829530 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-79kz5"] Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.830120 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gzd9l"] Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.830333 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.830350 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gzd9l" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.832625 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.832911 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.833144 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.833161 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.834301 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.834338 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.834353 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.834772 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.836994 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.837033 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.837047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.837071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.837102 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.837087 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:48 crc kubenswrapper[4771]: E0129 09:06:48.837223 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.860178 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.876531 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbmgv\" (UniqueName: \"kubernetes.io/projected/a2239f9c-5e91-409a-a0bc-680754704c77-kube-api-access-hbmgv\") pod \"node-resolver-gzd9l\" (UID: \"a2239f9c-5e91-409a-a0bc-680754704c77\") " pod="openshift-dns/node-resolver-gzd9l" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.876610 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-proxy-tls\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.876638 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a2239f9c-5e91-409a-a0bc-680754704c77-hosts-file\") pod \"node-resolver-gzd9l\" (UID: \"a2239f9c-5e91-409a-a0bc-680754704c77\") " pod="openshift-dns/node-resolver-gzd9l" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.876662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpkc\" (UniqueName: \"kubernetes.io/projected/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-kube-api-access-gfpkc\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.876729 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-rootfs\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.876909 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-mcd-auth-proxy-config\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.882328 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.910511 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.930984 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.939795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.939848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.939861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.939883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.939898 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:48Z","lastTransitionTime":"2026-01-29T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.954927 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978417 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-mcd-auth-proxy-config\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978485 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-proxy-tls\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978513 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a2239f9c-5e91-409a-a0bc-680754704c77-hosts-file\") pod \"node-resolver-gzd9l\" (UID: \"a2239f9c-5e91-409a-a0bc-680754704c77\") " pod="openshift-dns/node-resolver-gzd9l"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978536 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbmgv\" (UniqueName: \"kubernetes.io/projected/a2239f9c-5e91-409a-a0bc-680754704c77-kube-api-access-hbmgv\") pod \"node-resolver-gzd9l\" (UID: \"a2239f9c-5e91-409a-a0bc-680754704c77\") " pod="openshift-dns/node-resolver-gzd9l"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978562 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpkc\" (UniqueName: \"kubernetes.io/projected/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-kube-api-access-gfpkc\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978590 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-rootfs\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978685 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-rootfs\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.978745 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a2239f9c-5e91-409a-a0bc-680754704c77-hosts-file\") pod \"node-resolver-gzd9l\" (UID: \"a2239f9c-5e91-409a-a0bc-680754704c77\") " pod="openshift-dns/node-resolver-gzd9l"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.979675 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-mcd-auth-proxy-config\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.980003 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:48Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.988333 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-proxy-tls\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:48 crc kubenswrapper[4771]: I0129 09:06:48.999407 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbmgv\" (UniqueName: \"kubernetes.io/projected/a2239f9c-5e91-409a-a0bc-680754704c77-kube-api-access-hbmgv\") pod \"node-resolver-gzd9l\" (UID: \"a2239f9c-5e91-409a-a0bc-680754704c77\") " pod="openshift-dns/node-resolver-gzd9l"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.012265 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpkc\" (UniqueName: \"kubernetes.io/projected/12eedc7e-dceb-4fc2-b26a-5a4a87846b1a-kube-api-access-gfpkc\") pod \"machine-config-daemon-79kz5\" (UID: \"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\") " pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.016803 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.033617 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.042977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.043033 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.043043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.043063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.043076 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.047626 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.062773 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.098139 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.116082 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.132798 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.145460 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.145514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.145525 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.145545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.145558 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.146595 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-79kz5"
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.161135 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gzd9l"
Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.162119 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12eedc7e_dceb_4fc2_b26a_5a4a87846b1a.slice/crio-bc2cc6e1f0e83b5022bcc3996a676dbad5239d530354cc3a18c8186349fa0fdd WatchSource:0}: Error finding container bc2cc6e1f0e83b5022bcc3996a676dbad5239d530354cc3a18c8186349fa0fdd: Status 404 returned error can't find the container with id bc2cc6e1f0e83b5022bcc3996a676dbad5239d530354cc3a18c8186349fa0fdd
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.164278 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.173263 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2239f9c_5e91_409a_a0bc_680754704c77.slice/crio-3da3e6fa1d97ab928a1f9553e6a112a0175a6ac219b51c16ddb85e1ecdf8556a WatchSource:0}: Error finding container 3da3e6fa1d97ab928a1f9553e6a112a0175a6ac219b51c16ddb85e1ecdf8556a: Status 404 returned error can't find the container with id 3da3e6fa1d97ab928a1f9553e6a112a0175a6ac219b51c16ddb85e1ecdf8556a
Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.177871 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.194739 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.220834 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.245715 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.248726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.248762 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.248772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.248789 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.248801 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.278224 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cfc8z"] Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.278678 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.286383 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kx4bn"] Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.287443 4771 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.287480 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.287728 4771 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.287793 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.288302 4771 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.288383 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.289114 4771 util.go:30] "No sandbox for pod can be found. 
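
The secrets/configmaps "forbidden" warnings just above are not missing RBAC rules; they come from the Node authorizer ("no relationship found between node 'crc' and this object"). A kubelet may only read a secret or configmap that the authorizer's graph can trace to a pod bound to that node, and at this instant the freshly ADDed multus pods have not been indexed yet, so the list/watch attempts bounce. A toy model of the idea follows; the data structures and names are illustrative, not the actual kube-apiserver graph implementation:

    # Toy Node authorizer: node -> pods bound to it -> objects those pods reference.
    pods_on_node = {"crc": {"multus-cfc8z"}}
    pod_refs: dict[str, set[str]] = {}  # empty until the pod's references are indexed

    def node_can_read(node: str, obj: str) -> bool:
        return any(obj in pod_refs.get(pod, set())
                   for pod in pods_on_node.get(node, set()))

    print(node_can_read("crc", "configmap/cni-copy-resources"))  # False: no relationship yet
    pod_refs["multus-cfc8z"] = {"configmap/cni-copy-resources"}  # graph catches up
    print(node_can_read("crc", "configmap/cni-copy-resources"))  # True

This is why such errors are usually transient at startup: once the pod and its volume references are indexed (the "Caches populated" line below shows one succeeding), the same watches go through.
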
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.289652 4771 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.289752 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.291009 4771 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.291040 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 09:06:49 crc kubenswrapper[4771]: W0129 09:06:49.292305 4771 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.292351 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.299050 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.303735 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.339549 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 09:01:48 +0000 UTC, rotation deadline is 2026-12-10 18:16:22.278372988 +0000 UTC Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.339623 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7569h9m32.938752617s for next certificate rotation Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.358536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.358585 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.358597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.358618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.358632 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-k8s-cni-cncf-io\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382177 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-kubelet\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382195 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382212 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cni-binary-copy\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382236 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382386 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-netns\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382443 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-cni-multus\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382462 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-multus-certs\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382521 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-os-release\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382559 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-os-release\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mkfg\" (UniqueName: \"kubernetes.io/projected/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-kube-api-access-2mkfg\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382618 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-system-cni-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382643 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-socket-dir-parent\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382754 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382785 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-daemon-config\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382803 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-etc-kubernetes\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 
09:06:49.382839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-hostroot\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382901 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-cnibin\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382944 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmd8f\" (UniqueName: \"kubernetes.io/projected/f580e781-76de-491c-a6e6-7295469d366a-kube-api-access-vmd8f\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382971 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-cni-bin\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.382999 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-conf-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.383088 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-cni-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.383127 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cnibin\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.420619 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.462448 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.462504 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.462519 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.462540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.462554 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.469442 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483745 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-system-cni-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-socket-dir-parent\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483824 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483854 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-daemon-config\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483874 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-etc-kubernetes\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483893 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-hostroot\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-cni-bin\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483930 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-cnibin\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483948 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmd8f\" (UniqueName: \"kubernetes.io/projected/f580e781-76de-491c-a6e6-7295469d366a-kube-api-access-vmd8f\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483966 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-cni-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483983 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cnibin\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.483999 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-conf-dir\") pod \"multus-cfc8z\" (UID: 
\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484023 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-k8s-cni-cncf-io\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484038 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-cni-bin\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484043 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484064 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-system-cni-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484138 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-socket-dir-parent\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484192 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-conf-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484099 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cni-binary-copy\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484240 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-k8s-cni-cncf-io\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484095 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-cnibin\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484167 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cnibin\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484273 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-kubelet\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484138 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-etc-kubernetes\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484253 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-kubelet\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484323 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-cni-dir\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484343 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484161 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-hostroot\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484380 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484414 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-multus-certs\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484430 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-system-cni-dir\") pod 
\"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-netns\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484467 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-multus-certs\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484470 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-cni-multus\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-var-lib-cni-multus\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484498 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-os-release\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484530 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-host-run-netns\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484546 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-os-release\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484586 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mkfg\" (UniqueName: \"kubernetes.io/projected/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-kube-api-access-2mkfg\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484593 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-os-release\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484633 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-os-release\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.484972 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f580e781-76de-491c-a6e6-7295469d366a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.565706 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.565769 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.565781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.565801 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.565812 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.592204 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.638122 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.658493 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.669244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.669293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.669303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.669321 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.669333 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.680920 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.700154 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.715617 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.734178 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.752599 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:07:50.873685724 +0000 UTC Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.766402 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.773262 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.773319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.773333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.773354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.773405 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.787036 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.808140 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.813587 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntlqb"] Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.814636 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.817151 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.817256 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.817383 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.817602 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.819342 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.821957 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.822011 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.830399 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7
73257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.837815 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.837929 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.837975 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:06:49 crc kubenswrapper[4771]: E0129 09:06:49.838267 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.846645 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.864891 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.876494 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.876540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.876549 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.876569 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.876582 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.883055 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.888531 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-netd\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.888583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.888757 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-env-overrides\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.888810 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-script-lib\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.888883 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-systemd-units\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.888909 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-var-lib-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.888968 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-etc-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889007 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovn-node-metrics-cert\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889044 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-slash\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889074 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-systemd\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-log-socket\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889125 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889165 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-netns\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889191 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-ovn\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889216 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-config\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889242 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd469\" (UniqueName: \"kubernetes.io/projected/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-kube-api-access-rd469\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889337 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-node-log\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889455 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-bin\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889526 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.889576 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-kubelet\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.897932 4771 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.912204 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.930719 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.944324 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.956628 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.968179 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.980119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.980167 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.980178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.980198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.980211 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:49Z","lastTransitionTime":"2026-01-29T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.984636 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.990604 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd469\" (UniqueName: \"kubernetes.io/projected/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-kube-api-access-rd469\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-netns\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991068 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-netns\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991196 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-ovn\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991296 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-ovn\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991426 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-config\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991539 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-node-log\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-bin\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991896 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992045 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-kubelet\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992214 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-netd\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992332 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992496 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-env-overrides\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992610 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-script-lib\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992860 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-systemd-units\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.993580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-var-lib-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.993773 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-etc-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.993900 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovn-node-metrics-cert\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.993990 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-systemd\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-log-socket\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994141 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994227 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-slash\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992947 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-systemd-units\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994515 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-slash\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.993534 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-script-lib\") pod \"ovnkube-node-ntlqb\" 
(UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994085 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-etc-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992267 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-netd\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994111 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-var-lib-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991995 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992551 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-config\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992441 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-openvswitch\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.992168 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-kubelet\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994437 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-systemd\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.991680 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-node-log\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc 
kubenswrapper[4771]: I0129 09:06:49.991863 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-bin\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994471 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-log-socket\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.994491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:49 crc kubenswrapper[4771]: I0129 09:06:49.993198 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-env-overrides\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.001098 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovn-node-metrics-cert\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.018194 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.021472 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd469\" (UniqueName: \"kubernetes.io/projected/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-kube-api-access-rd469\") pod \"ovnkube-node-ntlqb\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.025085 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gzd9l" event={"ID":"a2239f9c-5e91-409a-a0bc-680754704c77","Type":"ContainerStarted","Data":"8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.025302 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gzd9l" event={"ID":"a2239f9c-5e91-409a-a0bc-680754704c77","Type":"ContainerStarted","Data":"3da3e6fa1d97ab928a1f9553e6a112a0175a6ac219b51c16ddb85e1ecdf8556a"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.030441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.030516 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.030529 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"bc2cc6e1f0e83b5022bcc3996a676dbad5239d530354cc3a18c8186349fa0fdd"} Jan 29 
09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.046900 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.076671 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.082558 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.082885 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.082962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.083031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.083112 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.100647 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f
26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.114280 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.125256 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.127352 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.139462 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.143645 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff7f16f4_439f_4743_b5f2_b9c6f6c346f5.slice/crio-628e5852b78d5cb1a949025b880e6f3b6e5e88212f9df3deef3cf867a9bb474e WatchSource:0}: Error finding container 628e5852b78d5cb1a949025b880e6f3b6e5e88212f9df3deef3cf867a9bb474e: Status 404 returned error can't find the container with id 628e5852b78d5cb1a949025b880e6f3b6e5e88212f9df3deef3cf867a9bb474e Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.155906 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.178100 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.197989 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.198051 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.198063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.198084 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.198101 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.202500 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.221314 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.248961 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.266112 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.291502 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.298209 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.300581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.300812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.300904 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.300981 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.301087 
4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.310826 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"imag
e\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.315843 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.324618 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.336064 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.356362 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.383111 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.396002 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.406907 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.407087 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.407179 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.407254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.407353 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.464516 4771 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.464899 4771 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.465172 4771 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.465219 4771 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.465632 4771 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.465680 4771 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.465733 4771 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.466212 4771 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.466329 4771 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: W0129 09:06:50.466398 4771 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.489623 4771 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition
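
[editor's annotation] The "failed to sync configmap cache: timed out waiting for the condition" errors above, together with the "No retries permitted until ... (durationBeforeRetry 500ms)" entries that follow, show the kubelet's volume manager deferring each failed MountVolume.SetUp and retrying it after a growing delay once the ConfigMap informer cache has had a chance to sync. A minimal sketch of that retry-with-backoff pattern is below; it is a generic illustration, not the kubelet's actual implementation. The 500ms initial delay is taken from the log, while the doubling factor and the cap are assumptions.

```go
// Generic sketch of retry with exponential backoff, the pattern behind the
// "No retries permitted until ... (durationBeforeRetry 500ms)" log entries.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op until it succeeds, waiting an exponentially
// growing delay between attempts (initial delay doubles, capped at max).
func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		} else {
			// Mirrors the wording of the nestedpendingoperations entries above.
			fmt.Printf("attempt %d failed: %v; no retries permitted until %s (durationBeforeRetry %s)\n",
				i+1, err, time.Now().Add(delay).Format(time.RFC3339Nano), delay)
		}
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	// Simulated operation that keeps failing, like MountVolume.SetUp while
	// the configmap informer cache has not synced yet.
	err := retryWithBackoff(func() error {
		return errors.New("failed to sync configmap cache")
	}, 500*time.Millisecond, 2*time.Minute, 4)
	fmt.Println("final:", err)
}
```

In the log this pattern resolves itself moments later: the "Caches populated" and "MountVolume.SetUp succeeded" entries further down show the retries succeeding once the caches sync.
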
Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.489623 4771 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.489633 4771 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.489783 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-binary-copy podName:f580e781-76de-491c-a6e6-7295469d366a nodeName:}" failed. No retries permitted until 2026-01-29 09:06:50.9897508 +0000 UTC m=+31.112591027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-binary-copy") pod "multus-additional-cni-plugins-kx4bn" (UID: "f580e781-76de-491c-a6e6-7295469d366a") : failed to sync configmap cache: timed out waiting for the condition Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.490001 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-daemon-config podName:a46c7969-6ce3-4ba5-a1ab-73bbf487ae73 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:50.989984386 +0000 UTC m=+31.112824613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-daemon-config") pod "multus-cfc8z" (UID: "a46c7969-6ce3-4ba5-a1ab-73bbf487ae73") : failed to sync configmap cache: timed out waiting for the condition Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.490023 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cni-binary-copy podName:a46c7969-6ce3-4ba5-a1ab-73bbf487ae73 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:50.990013857 +0000 UTC m=+31.112854084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cni-binary-copy") pod "multus-cfc8z" (UID: "a46c7969-6ce3-4ba5-a1ab-73bbf487ae73") : failed to sync configmap cache: timed out waiting for the condition Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.490049 4771 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.490094 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-sysctl-allowlist podName:f580e781-76de-491c-a6e6-7295469d366a nodeName:}" failed. No retries permitted until 2026-01-29 09:06:50.990082609 +0000 UTC m=+31.112922836 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-kx4bn" (UID: "f580e781-76de-491c-a6e6-7295469d366a") : failed to sync configmap cache: timed out waiting for the condition Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.510914 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.510963 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.510974 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.510992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.511004 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.515239 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.540975 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.563802 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.569457 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mkfg\" (UniqueName: \"kubernetes.io/projected/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-kube-api-access-2mkfg\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.570093 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmd8f\" (UniqueName: \"kubernetes.io/projected/f580e781-76de-491c-a6e6-7295469d366a-kube-api-access-vmd8f\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.598201 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.613850 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.613899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.613910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc 
kubenswrapper[4771]: I0129 09:06:50.613931 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.613943 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.717171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.717233 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.717245 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.717267 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.717282 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.753886 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:27:07.036343727 +0000 UTC Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.820470 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.820540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.820551 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.820570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.820582 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
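
[editor's annotation] The certificate_manager.go entry above logs both the serving certificate's expiration (2026-02-24) and a rotation deadline (2025-11-28) that falls well before it: the kubelet schedules rotation at a jittered point late in the validity window so that a fleet of nodes does not renew all at once. The sketch below shows that kind of computation; the 70-90% window and the assumed NotBefore are illustrative assumptions, not values taken from the kubelet source.

```go
// Sketch: derive a jittered rotation deadline inside a certificate's
// validity window, in the spirit of the "rotation deadline is ..." entry.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in roughly the last third of the
// validity window (fraction in [0.7, 0.9) of total validity is an assumption).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(fraction * float64(validity)))
}

func main() {
	// notAfter is the expiration from the log; notBefore (a one-year
	// validity) is assumed here, since the log only shows the expiration.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```

Relative to the logged clock (2026-01-29) that deadline already lies in the past, so rotation is due immediately; the "Certificate rotation detected, shutting down client connections" entry above shows the kubelet reacting to freshly rotated credentials in exactly that situation.
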
Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.837298 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:50 crc kubenswrapper[4771]: E0129 09:06:50.837473 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.858556 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.883144 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.897485 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.910098 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.923996 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.925319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.925370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.925379 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.925404 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.925418 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:50Z","lastTransitionTime":"2026-01-29T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.940671 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.955414 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.968855 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.982390 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:50 crc kubenswrapper[4771]: I0129 09:06:50.999736 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.011239 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.011300 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-daemon-config\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.011347 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.011374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cni-binary-copy\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.012267 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-cni-binary-copy\") pod \"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.012311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.012311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a46c7969-6ce3-4ba5-a1ab-73bbf487ae73-multus-daemon-config\") pod 
\"multus-cfc8z\" (UID: \"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\") " pod="openshift-multus/multus-cfc8z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.012483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f580e781-76de-491c-a6e6-7295469d366a-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4bn\" (UID: \"f580e781-76de-491c-a6e6-7295469d366a\") " pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.022514 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.027891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.027948 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.027960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.027981 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.027991 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.034622 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254" exitCode=0 Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.034687 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.034757 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"628e5852b78d5cb1a949025b880e6f3b6e5e88212f9df3deef3cf867a9bb474e"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.044050 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.075740 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.091712 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.092235 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cfc8z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.106236 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.106312 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.124223 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: W0129 09:06:51.129907 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf580e781_76de_491c_a6e6_7295469d366a.slice/crio-5681dc878a14dd413534a7449b6b435ded1d916f34178a2174586f8ff10fc376 WatchSource:0}: Error finding container 5681dc878a14dd413534a7449b6b435ded1d916f34178a2174586f8ff10fc376: Status 404 returned error can't find the container with id 5681dc878a14dd413534a7449b6b435ded1d916f34178a2174586f8ff10fc376 Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.130646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.130676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.130686 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.130725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.130741 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.140067 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.151527 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.164831 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.183023 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.204201 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29
T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.222973 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.233147 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.233224 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.233237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.233261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.233275 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.239220 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.255286 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.269720 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.282708 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.284233 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.292327 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.295737 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.318755 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:51Z 
is after 2025-08-24T17:21:41Z" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.335803 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.335848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.335858 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.335879 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.335921 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.419774 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.419941 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:06:59.419915285 +0000 UTC m=+39.542755512 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.419995 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.420020 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.420043 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.420067 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420157 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420174 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420174 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420192 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420203 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:59.420193982 +0000 UTC m=+39.543034209 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420209 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420242 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:59.420231773 +0000 UTC m=+39.543072000 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420205 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420248 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420371 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:59.420342316 +0000 UTC m=+39.543182723 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420262 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.420442 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:06:59.420433549 +0000 UTC m=+39.543273996 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.438722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.439110 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.439120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.439141 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.439154 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.467287 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.526375 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.543249 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.543285 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.543295 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.543311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.543323 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.609517 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.646467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.646519 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.646528 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.646546 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.646557 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.675987 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.718902 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.750532 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.750570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.750585 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.750602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.750618 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.759143 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:45:49.339567914 +0000 UTC Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.838508 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.838531 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.839201 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:06:51 crc kubenswrapper[4771]: E0129 09:06:51.839305 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.856394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.856430 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.856441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.856458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.856470 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.959656 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.959715 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.959735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.959757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:51 crc kubenswrapper[4771]: I0129 09:06:51.959769 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:51Z","lastTransitionTime":"2026-01-29T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.039352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerStarted","Data":"635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.039411 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerStarted","Data":"5681dc878a14dd413534a7449b6b435ded1d916f34178a2174586f8ff10fc376"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.043427 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerStarted","Data":"1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.043456 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerStarted","Data":"2e065693e4c4f0ac78964b36dd34232f49a40bc8fb923e5dd94169ff7bed06ed"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.047102 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.047169 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.047185 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.047195 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.047204 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.057297 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.069910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.069985 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.070008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.070049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.070061 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.080588 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.103961 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.118119 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.134572 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.150028 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.152478 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.164523 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.172916 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.173036 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.173052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.173075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.173113 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.178777 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.195827 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.223462 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.240993 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.257334 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.276151 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.276728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.276759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.276770 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.276828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.276840 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.292381 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.308505 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.325139 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.341715 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.358375 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.374773 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.379662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.379740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.379753 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.379776 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.379789 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.391851 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.423077 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.467558 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.482527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.482589 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.482602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.482624 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.482640 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.494452 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ksdpd"] Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.494961 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.496357 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.499686 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.499684 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.500546 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.502187 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.516107 4771 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.535733 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.552459 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.572640 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z 
is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.584916 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.585622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.585675 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.585686 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.585724 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.585741 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.599834 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.615275 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.634000 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43b02b53-246f-4869-8463-729e36aff07e-host\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.634074 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85w6l\" (UniqueName: \"kubernetes.io/projected/43b02b53-246f-4869-8463-729e36aff07e-kube-api-access-85w6l\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.634101 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/43b02b53-246f-4869-8463-729e36aff07e-serviceca\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.641770 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.661240 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.676232 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.689378 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.689436 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.689447 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.689469 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.689483 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.703076 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.735266 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43b02b53-246f-4869-8463-729e36aff07e-host\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.735340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/43b02b53-246f-4869-8463-729e36aff07e-serviceca\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.735363 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85w6l\" (UniqueName: \"kubernetes.io/projected/43b02b53-246f-4869-8463-729e36aff07e-kube-api-access-85w6l\") pod \"node-ca-ksdpd\" (UID: 
\"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.735500 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43b02b53-246f-4869-8463-729e36aff07e-host\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.736733 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/43b02b53-246f-4869-8463-729e36aff07e-serviceca\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.744056 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.760362 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:58:02.46386837 +0000 UTC Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.770547 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85w6l\" (UniqueName: 
\"kubernetes.io/projected/43b02b53-246f-4869-8463-729e36aff07e-kube-api-access-85w6l\") pod \"node-ca-ksdpd\" (UID: \"43b02b53-246f-4869-8463-729e36aff07e\") " pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.794064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.794113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.794126 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.794146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.794160 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.804868 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.810080 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ksdpd" Jan 29 09:06:52 crc kubenswrapper[4771]: W0129 09:06:52.825679 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43b02b53_246f_4869_8463_729e36aff07e.slice/crio-37329b761100dffa23e103d22ef8713642d212aab205c6eb1b912ca95fe80472 WatchSource:0}: Error finding container 37329b761100dffa23e103d22ef8713642d212aab205c6eb1b912ca95fe80472: Status 404 returned error can't find the container with id 37329b761100dffa23e103d22ef8713642d212aab205c6eb1b912ca95fe80472 Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.837434 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:52 crc kubenswrapper[4771]: E0129 09:06:52.837620 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.843320 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"}
,{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.889820 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.898221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.898250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.898259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.898275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.898285 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:52Z","lastTransitionTime":"2026-01-29T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.928073 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:52 crc kubenswrapper[4771]: I0129 09:06:52.962255 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.003716 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.004180 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.004557 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.004739 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.004831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.004903 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.045002 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.054273 4771 generic.go:334] "Generic (PLEG): container finished" podID="f580e781-76de-491c-a6e6-7295469d366a" containerID="635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04" exitCode=0 Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.054371 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerDied","Data":"635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.061605 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.064803 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ksdpd" event={"ID":"43b02b53-246f-4869-8463-729e36aff07e","Type":"ContainerStarted","Data":"37329b761100dffa23e103d22ef8713642d212aab205c6eb1b912ca95fe80472"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.089063 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.108613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 
09:06:53.109389 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.109435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.109469 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.109493 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.131838 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T
09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.173666 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.205614 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.214055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.214137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.214149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.214171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.214185 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.243468 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.255798 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 
09:06:53.306456 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.318623 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.318674 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.318712 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.318742 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.318757 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.349831 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc 
kubenswrapper[4771]: I0129 09:06:53.383487 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.421783 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.421841 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.421854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.421880 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.421894 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.421900 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.470393 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.501671 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.524742 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.524801 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 
09:06:53.524814 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.524836 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.524849 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.541672 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.584774 
4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.624335 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.627410 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.627465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.627475 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.627493 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.627507 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.660588 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.703674 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.730573 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.730616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.730625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.730643 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.730655 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.760862 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:41:05.819292662 +0000 UTC Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.834219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.834283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.834296 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.834321 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.834337 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.837478 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.837483 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:53 crc kubenswrapper[4771]: E0129 09:06:53.837604 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:06:53 crc kubenswrapper[4771]: E0129 09:06:53.837765 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.910177 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.910236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.910250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.910274 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.910290 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: E0129 09:06:53.923649 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.927609 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.927649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.927663 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.927681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.927711 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: E0129 09:06:53.941990 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.952536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.952584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.952596 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.952618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.952629 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: E0129 09:06:53.965814 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.969791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.969831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.969844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.969866 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.969882 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:53 crc kubenswrapper[4771]: E0129 09:06:53.982430 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the first patch attempt above; elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.986764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.986814 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.986830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.986853 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:53 crc kubenswrapper[4771]: I0129 09:06:53.986864 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:53Z","lastTransitionTime":"2026-01-29T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: E0129 09:06:54.003302 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the first patch attempt above; elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: E0129 09:06:54.003432 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.009769 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.009806 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.009820 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.009839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.009852 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.070376 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ksdpd" event={"ID":"43b02b53-246f-4869-8463-729e36aff07e","Type":"ContainerStarted","Data":"6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.072352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerStarted","Data":"78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.086186 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.101574 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.112782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.112870 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.112884 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.112904 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.112919 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.118310 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.133443 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.152651 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.174052 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.191241 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.208055 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.215316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.215394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.215410 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.215438 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.215453 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.224394 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.250801 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc 
kubenswrapper[4771]: I0129 09:06:54.280119 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.297024 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.320208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.320291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.320305 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.320331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.320348 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.323103 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z 
is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.342867 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.368169 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.386936 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.402056 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.425706 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.425791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.425802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.425824 4771 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.425842 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.427920 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.462192 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.509961 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.529080 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.529132 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.529143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.529165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.529180 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.545858 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.584772 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.623792 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.632168 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.632216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.632226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.632247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.632259 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.664804 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.703456 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.735253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.735319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.735330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.735349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.735360 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.749406 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.762084 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:31:22.745871296 +0000 UTC Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.786079 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.825346 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.837163 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:54 crc kubenswrapper[4771]: E0129 09:06:54.837376 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.838616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.838686 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.838731 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.838763 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.838779 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.862910 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.905233 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.941858 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.941915 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.941931 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.941954 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:54 crc kubenswrapper[4771]: I0129 09:06:54.941966 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:54Z","lastTransitionTime":"2026-01-29T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.044649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.044725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.044740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.044762 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.044779 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.079146 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.086042 4771 generic.go:334] "Generic (PLEG): container finished" podID="f580e781-76de-491c-a6e6-7295469d366a" containerID="78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488" exitCode=0 Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.086153 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerDied","Data":"78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.100536 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.126809 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.142986 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.147627 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.147660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.147672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.147692 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.147728 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.162025 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.177122 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.189850 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.211977 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.237920 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.250628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.250687 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.250751 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.250774 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.250787 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.265364 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.303970 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.342721 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.353275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.353330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.353342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.353368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.353381 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.383720 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.426822 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.456791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.456847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.456857 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.456879 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.456893 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.460057 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.501133 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.559641 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.559805 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.559816 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.559835 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.559846 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.662188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.662236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.662247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.662266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.662276 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.762726 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:28:39.777456302 +0000 UTC Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.764658 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.764719 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.764737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.764757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.764769 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.837871 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.837933 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:55 crc kubenswrapper[4771]: E0129 09:06:55.838062 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:06:55 crc kubenswrapper[4771]: E0129 09:06:55.838190 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.867427 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.867471 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.867485 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.867503 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.867515 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.970360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.970416 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.970426 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.970447 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.970460 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:55Z","lastTransitionTime":"2026-01-29T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.974724 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:06:55 crc kubenswrapper[4771]: I0129 09:06:55.995211 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.017799 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.036321 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd95
6ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.054933 4771 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.070141 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.072985 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.073038 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.073056 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.073081 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.073093 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.087279 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.095007 4771 generic.go:334] "Generic (PLEG): container finished" podID="f580e781-76de-491c-a6e6-7295469d366a" containerID="2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f" exitCode=0 Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.095081 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerDied","Data":"2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.105613 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.121310 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.144673 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.158336 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.171252 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.178654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.178739 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.178759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.178786 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.178812 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.184633 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.201519 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.212439 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.223499 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.235133 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.251428 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.266082 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.281266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.281324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.281337 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.281360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.281376 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.282186 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.310379 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.344794 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.383662 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.386306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.386352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.386363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.386385 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.386395 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.425036 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.462400 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.489572 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.489633 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.489646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.489667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.489680 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.507308 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.543807 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.584354 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.595619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.595661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.595676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.595740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.595756 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.664480 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade111
03dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.682907 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.698249 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.698303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.698313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.698332 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.698344 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.705813 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.763546 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:33:50.263093118 +0000 UTC Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.800881 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.800938 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.800955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.800983 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.801002 4771 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.837854 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:56 crc kubenswrapper[4771]: E0129 09:06:56.838030 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.903822 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.903888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.903899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.903918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:56 crc kubenswrapper[4771]: I0129 09:06:56.903931 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:56Z","lastTransitionTime":"2026-01-29T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.006468 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.006535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.006558 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.006584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.006598 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.101516 4771 generic.go:334] "Generic (PLEG): container finished" podID="f580e781-76de-491c-a6e6-7295469d366a" containerID="d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e" exitCode=0 Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.101594 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerDied","Data":"d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.109830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.109881 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.109897 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.109920 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.109938 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.116019 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.116335 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.116367 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.120394 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.138575 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.158064 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.158834 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.165560 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z 
is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.177423 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.189867 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.212790 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.212833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.212847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.212866 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.212879 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.213424 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.228341 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.243957 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.262483 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.285209 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.299393 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.312252 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.314740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.314790 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.314809 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.314829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.314842 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.324582 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.335935 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.349682 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf42
39e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.360981 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.379093 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.417008 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.417041 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.417050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.417070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.417081 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.422965 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.473338 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.508044 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.518968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.519028 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.519041 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.519060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.519074 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.542015 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.585472 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.619989 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.621726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.621763 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.621776 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.621796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.621810 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.668955 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.705847 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.724589 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.724629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.724640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.724660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.724672 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.741546 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.763763 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:38:34.227678584 +0000 UTC Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.784419 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.824560 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.827713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.828111 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.828320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.828520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.828734 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.837508 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.837583 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:57 crc kubenswrapper[4771]: E0129 09:06:57.838108 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:06:57 crc kubenswrapper[4771]: E0129 09:06:57.838281 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.862887 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.903180 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.931794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.931848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.931867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.931891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:57 crc kubenswrapper[4771]: I0129 09:06:57.931917 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:57Z","lastTransitionTime":"2026-01-29T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.034451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.034510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.034521 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.034544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.034556 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.123820 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerStarted","Data":"4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.123918 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.137632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.137725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.137737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.137756 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.137770 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.144190 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.158640 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.184896 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdd
e8f4bec0aceecad3f5485287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.198869 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.211826 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.226603 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.241556 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.241614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.241626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.241648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.241661 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.243662 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.266170 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.281186 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.300978 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.344421 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.344458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.344467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.344481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.344493 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.348016 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.390259 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb035
9a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.423862 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.446722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.447081 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.447161 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.447246 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.447348 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.466319 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.503214 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:58Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.549838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.549888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.549899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.549919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.549933 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.653183 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.653270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.653285 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.653309 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.653329 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.756266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.756331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.756344 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.756363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.756375 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.764483 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:33:39.117752419 +0000 UTC Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.837254 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:58 crc kubenswrapper[4771]: E0129 09:06:58.837493 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.859202 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.859259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.859272 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.859293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.859307 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.962237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.962287 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.962297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.962314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:58 crc kubenswrapper[4771]: I0129 09:06:58.962326 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:58Z","lastTransitionTime":"2026-01-29T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.064842 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.065089 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.065097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.065119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.065129 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.131577 4771 generic.go:334] "Generic (PLEG): container finished" podID="f580e781-76de-491c-a6e6-7295469d366a" containerID="4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe" exitCode=0 Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.131655 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerDied","Data":"4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe"} Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.131798 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.151514 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.168315 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.168586 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.168633 4771 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.168644 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.168660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.168671 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.195559 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.212436 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.236160 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.254138 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.268682 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.271846 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.271914 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.271926 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.271981 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.271994 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.290837 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.344286 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.363188 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab
2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.375248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.375281 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.375295 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:06:59 crc kubenswrapper[4771]: 
I0129 09:06:59.375317 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.375328 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.377395 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.395197 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.416008 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdd
e8f4bec0aceecad3f5485287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.430135 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.441126 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.441242 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.441271 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.441294 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.441317 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441435 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441476 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441498 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441512 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:15.441493642 +0000 UTC m=+55.564333859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441540 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441601 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:15.441583185 +0000 UTC m=+55.564423412 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441619 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441657 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441759 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441761 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:15.441729909 +0000 UTC m=+55.564570126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441779 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.441919 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:15.441852692 +0000 UTC m=+55.564693109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.442115 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:07:15.442098839 +0000 UTC m=+55.564939226 (durationBeforeRetry 16s). 
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.442625 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:06:59Z is after 2025-08-24T17:21:41Z"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.477548 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.477597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.477605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.477622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.477632 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.580593 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.580625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.580635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.580651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.580663 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.683068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.683111 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.683120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.683139 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.683148 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.765005 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:07:13.58826689 +0000 UTC
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.786541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.786599 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.786609 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.786633 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.786651 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.838008 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.838087 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.838260 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:06:59 crc kubenswrapper[4771]: E0129 09:06:59.838461 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.890260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.890313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.890328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.890353 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.890370 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.993226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.993275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.993284 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.993300 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:06:59 crc kubenswrapper[4771]: I0129 09:06:59.993313 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:06:59Z","lastTransitionTime":"2026-01-29T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.097008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.097410 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.097559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.097654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.097769 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.141101 4771 generic.go:334] "Generic (PLEG): container finished" podID="f580e781-76de-491c-a6e6-7295469d366a" containerID="d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9" exitCode=0
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.141172 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerDied","Data":"d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9"}
Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.163000 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.179933 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.195454 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.202203 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.202526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.202541 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.202559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.202569 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.213343 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.233401 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.257593 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.273735 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.294982 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdd
e8f4bec0aceecad3f5485287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.306260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.306317 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.306330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.306350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.306364 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.314836 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.329397 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.342150 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.354988 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.369590 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.382685 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.395570 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.473927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.473983 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.473996 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.474015 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.474026 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.576737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.576795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.576809 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.576833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.576849 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.679305 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.679365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.679378 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.679400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.679416 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.766025 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:23:06.995577083 +0000 UTC Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.783216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.783276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.783294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.783316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.783330 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.837601 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:00 crc kubenswrapper[4771]: E0129 09:07:00.837791 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.858953 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.885481 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.886669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.886760 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.886781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.886804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.886817 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.905063 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.922002 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.937898 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.956763 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.972185 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.986944 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.989773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.989830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.989841 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.989859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:00 crc kubenswrapper[4771]: I0129 09:07:00.989871 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:00Z","lastTransitionTime":"2026-01-29T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.007594 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.020518 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.039156 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.068771 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.083707 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.092117 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.092161 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.092175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.092197 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.092210 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.105069 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.119549 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.146974 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/0.log" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.157969 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287" exitCode=1 Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.158144 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.160891 4771 scope.go:117] "RemoveContainer" containerID="f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.161389 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" event={"ID":"f580e781-76de-491c-a6e6-7295469d366a","Type":"ContainerStarted","Data":"b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.173676 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.192098 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.194564 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.194618 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.194630 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.194674 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.194721 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.213745 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,
\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.230896 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.247223 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.262227 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.276740 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.291511 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.297477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.297526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.297540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.297565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.297582 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.317247 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 09:07:01.014244 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 09:07:01.014259 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 09:07:01.014266 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 09:07:01.014318 5990 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 09:07:01.014329 5990 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 09:07:01.014351 5990 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 09:07:01.014361 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 09:07:01.014369 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 09:07:01.014378 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 09:07:01.014384 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 09:07:01.014392 5990 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 09:07:01.016867 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 09:07:01.016893 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 09:07:01.016917 5990 factory.go:656] Stopping watch factory\\\\nI0129 09:07:01.016935 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0129 09:07:01.016981 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.329683 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.340442 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.354112 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.371724 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.384610 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.395796 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.402103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.402157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.402167 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.402185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.402201 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.415892 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 09:07:01.014244 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 09:07:01.014259 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 09:07:01.014266 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 09:07:01.014318 5990 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 09:07:01.014329 5990 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 09:07:01.014351 5990 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 09:07:01.014361 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 09:07:01.014369 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 09:07:01.014378 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 09:07:01.014384 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 09:07:01.014392 5990 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 09:07:01.016867 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 09:07:01.016893 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 09:07:01.016917 5990 factory.go:656] Stopping watch factory\\\\nI0129 09:07:01.016935 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0129 09:07:01.016981 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.430682 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.448508 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.467057 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.481901 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.498427 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.505998 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.506058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.506079 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.506106 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.506123 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.514552 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.534546 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.577070 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.599817 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.609600 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.609667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.609676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.609721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.609738 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.621341 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.640327 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.660686 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.713374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.713429 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.713441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.713462 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.713474 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.721536 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.740919 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.766606 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:41:46.890665145 +0000 UTC Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.816504 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.816564 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.816574 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.816596 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.816609 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.837147 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.837198 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:01 crc kubenswrapper[4771]: E0129 09:07:01.837369 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:01 crc kubenswrapper[4771]: E0129 09:07:01.837555 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.919977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.920020 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.920030 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.920048 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:01 crc kubenswrapper[4771]: I0129 09:07:01.920058 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:01Z","lastTransitionTime":"2026-01-29T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.023090 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.023126 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.023136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.023164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.023181 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.126160 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.126209 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.126218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.126239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.126254 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.168274 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/1.log" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.168981 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/0.log" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.172246 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6" exitCode=1 Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.172308 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.172366 4771 scope.go:117] "RemoveContainer" containerID="f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.173321 4771 scope.go:117] "RemoveContainer" containerID="ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6" Jan 29 09:07:02 crc kubenswrapper[4771]: E0129 09:07:02.173670 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.187473 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.201137 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.218558 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.229380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.229449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.229463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.229487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.229502 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.234909 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.251241 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.268653 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.284863 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.304871 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f
04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.331526 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9
476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.332925 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.333041 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.333110 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.333185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.333272 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.347644 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.367234 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 09:07:01.014244 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 09:07:01.014259 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 09:07:01.014266 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 09:07:01.014318 5990 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 09:07:01.014329 5990 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 09:07:01.014351 5990 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 09:07:01.014361 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 09:07:01.014369 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 09:07:01.014378 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 09:07:01.014384 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 09:07:01.014392 5990 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 09:07:01.016867 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 09:07:01.016893 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 09:07:01.016917 5990 factory.go:656] Stopping watch factory\\\\nI0129 09:07:01.016935 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0129 09:07:01.016981 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.383384 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.399198 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.414129 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.428593 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.436812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.436862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.436873 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.436892 4771 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.436903 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.540097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.540152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.540168 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.540192 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.540205 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.638136 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9"] Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.639186 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.643802 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.643826 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.643943 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.643964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.643992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.644006 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.644957 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.664236 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-read
yz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.681830 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.694994 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.695137 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.695170 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5p5\" (UniqueName: \"kubernetes.io/projected/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-kube-api-access-wb5p5\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.695212 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.696800 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.715454 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.729816 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.745972 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f
04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.747291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.747327 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.747336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.747354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.747368 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.763628 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.766827 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:27:39.146510656 +0000 UTC Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.780254 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.795981 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.796669 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5p5\" (UniqueName: \"kubernetes.io/projected/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-kube-api-access-wb5p5\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.796595 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.796833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.796899 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.797462 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: 
\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.804237 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 09:07:01.014244 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 09:07:01.014259 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 09:07:01.014266 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 09:07:01.014318 5990 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 09:07:01.014329 5990 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 09:07:01.014351 5990 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 09:07:01.014361 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 09:07:01.014369 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 09:07:01.014378 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 09:07:01.014384 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 09:07:01.014392 5990 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 09:07:01.016867 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 09:07:01.016893 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 09:07:01.016917 5990 factory.go:656] Stopping watch factory\\\\nI0129 09:07:01.016935 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0129 09:07:01.016981 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.805276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.814522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5p5\" (UniqueName: \"kubernetes.io/projected/5d61ce40-3dd1-4ed1-8c9d-e251d0af2987-kube-api-access-wb5p5\") pod \"ovnkube-control-plane-749d76644c-vg6n9\" (UID: \"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.822372 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.836681 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.837142 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:02 crc kubenswrapper[4771]: E0129 09:07:02.837274 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.850412 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.850472 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.850483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.850525 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.850538 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.851091 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.866031 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.879341 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.893293 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.908445 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:02Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.953458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.953511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.953522 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.953541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.953555 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:02Z","lastTransitionTime":"2026-01-29T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:02 crc kubenswrapper[4771]: I0129 09:07:02.962677 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" Jan 29 09:07:02 crc kubenswrapper[4771]: W0129 09:07:02.979513 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d61ce40_3dd1_4ed1_8c9d_e251d0af2987.slice/crio-de5e82ce79446bb163fc491bc94fb576c2a257efbe6053febcfdeb60399a8a15 WatchSource:0}: Error finding container de5e82ce79446bb163fc491bc94fb576c2a257efbe6053febcfdeb60399a8a15: Status 404 returned error can't find the container with id de5e82ce79446bb163fc491bc94fb576c2a257efbe6053febcfdeb60399a8a15 Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.056135 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.056178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.056187 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.056204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.056215 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.159553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.159605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.159615 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.159634 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.159645 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.179215 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/1.log"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.186290 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" event={"ID":"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987","Type":"ContainerStarted","Data":"de5e82ce79446bb163fc491bc94fb576c2a257efbe6053febcfdeb60399a8a15"}
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.262935 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.263007 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.263021 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.263045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.263058 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.366959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.367026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.367040 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.367064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.367078 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.393087 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lzs9r"]
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.393817 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:03 crc kubenswrapper[4771]: E0129 09:07:03.393918 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
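With the node stuck NotReady, the journal repeats a handful of entry shapes at high volume (node-status events, status-patch failures, volume-reconciler retries). A rough way to see the mix is to histogram the klog source tag on each kubenswrapper record; a sketch (Python 3, stdlib only; journal.log is a hypothetical export of this journal, not a path from the log):

```python
#!/usr/bin/env python3
# Histogram kubelet log records by their klog source tag (for example
# "setters.go:603" or "status_manager.go:875") to see what dominates the
# flood. Sketch: assumes this journal was saved to a file named journal.log
# (hypothetical name) with entries shaped like the ones above.
import re
from collections import Counter

# klog header: I0129 09:07:02.953555 4771 setters.go:603] ...
TAG = re.compile(r"[IWE]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+(\S+?)\]")

counts = Counter()
with open("journal.log", encoding="utf-8") as fh:
    for line in fh:
        m = TAG.search(line)
        if m:
            counts[m.group(1)] += 1

for tag, n in counts.most_common(10):
    print(f"{n:6d}  {tag}")
```

In a stretch like this one, status_manager.go:875 and kubelet_node_status.go:724 would be expected to top the list, which is a quick hint that one upstream fault (the webhook) is fanning out into many per-pod errors.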
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.406890 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.407020 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9rc\" (UniqueName: \"kubernetes.io/projected/938d1706-ae32-445f-b1b0-6cacad136ef8-kube-api-access-ts9rc\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.411603 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.424486 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.437154 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.450539 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.463896 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.469624 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.469665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.469676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.469717 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.469732 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.479236 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.497050 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.508446 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.508554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9rc\" (UniqueName: \"kubernetes.io/projected/938d1706-ae32-445f-b1b0-6cacad136ef8-kube-api-access-ts9rc\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:03 crc kubenswrapper[4771]: E0129 09:07:03.508649 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 09:07:03 crc kubenswrapper[4771]: E0129 09:07:03.508780 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:04.008752931 +0000 UTC m=+44.131593158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered
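The nestedpendingoperations entry above shows the volume manager refusing immediate retries: the failed mount is retried after 500ms, and on repeated failure the delay grows exponentially up to a cap. A sketch of that schedule (illustrative constants: the 500ms start matches the entry, while the doubling and the 2m2s cap are the upstream kubelet defaults as I understand them, not values read from this cluster):

```python
#!/usr/bin/env python3
# Illustrate the exponential backoff behind "(durationBeforeRetry 500ms)".
# Assumed constants: 500 ms initial delay, doubling per failure, capped at
# 2m2s -- illustrative of the kubelet's volume-operation backoff, not
# values read from this cluster's configuration.
from datetime import timedelta

initial = timedelta(milliseconds=500)
cap = timedelta(minutes=2, seconds=2)

delay, elapsed = initial, timedelta(0)
for failure in range(1, 11):
    print(f"failure {failure:2d}: next retry in {delay} "
          f"(total wait so far {elapsed})")
    elapsed += delay
    delay = min(delay * 2, cap)
```

That matches the entry: the mount failure logged at 09:07:03.508780 schedules the next attempt for 09:07:04.008752931, exactly 500ms later.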
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.518109 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.534626 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9rc\" (UniqueName: \"kubernetes.io/projected/938d1706-ae32-445f-b1b0-6cacad136ef8-kube-api-access-ts9rc\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.544441 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.563566 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.572925 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.572993 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.573018 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.573046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.573065 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.579904 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.602689 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.622774 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.636908 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.652394 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.665411 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.676173 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.676225 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.676235 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.676253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.676266 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.686657 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 09:07:01.014244 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 09:07:01.014259 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 09:07:01.014266 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 09:07:01.014318 5990 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 09:07:01.014329 5990 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 09:07:01.014351 5990 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 09:07:01.014361 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 09:07:01.014369 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 09:07:01.014378 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 09:07:01.014384 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 09:07:01.014392 5990 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 09:07:01.016867 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 09:07:01.016893 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 09:07:01.016917 5990 factory.go:656] Stopping watch factory\\\\nI0129 09:07:01.016935 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0129 09:07:01.016981 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.767197 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:51:48.259848805 +0000 UTC Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.779838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.779894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.779910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.779948 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.779959 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.837555 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.837572 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:03 crc kubenswrapper[4771]: E0129 09:07:03.837800 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:03 crc kubenswrapper[4771]: E0129 09:07:03.837896 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.882516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.882572 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.882586 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.882610 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.882627 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.986854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.986951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.986996 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.987034 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:03 crc kubenswrapper[4771]: I0129 09:07:03.987059 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:03Z","lastTransitionTime":"2026-01-29T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.014650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.014940 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.015029 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:05.01500523 +0000 UTC m=+45.137845467 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.072465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.072519 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.072529 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.072549 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.072562 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.086724 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.092330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.092389 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.092402 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.092427 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.092438 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.110080 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.114180 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.114212 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.114220 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.114237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.114248 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.128564 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.134140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.134196 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.134208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.134228 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.134241 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.147935 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.152570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.152628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.152639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.152659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.152677 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.167948 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.168095 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.170291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.170352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.170365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.170388 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.170400 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.192091 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" event={"ID":"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987","Type":"ContainerStarted","Data":"c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.192177 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" event={"ID":"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987","Type":"ContainerStarted","Data":"2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.210265 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.226551 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.247834 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.264345 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd95
6ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.273739 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.273796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.273839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.273861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.273875 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.278163 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.290513 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.307621 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.323588 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.349262 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75
c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 09:07:01.014244 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 09:07:01.014259 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 09:07:01.014266 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 09:07:01.014318 5990 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 09:07:01.014329 5990 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 09:07:01.014351 5990 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 09:07:01.014361 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 09:07:01.014369 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 09:07:01.014378 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 09:07:01.014384 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 09:07:01.014392 5990 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 09:07:01.016867 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 09:07:01.016893 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 09:07:01.016917 5990 factory.go:656] Stopping watch factory\\\\nI0129 09:07:01.016935 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0129 09:07:01.016981 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.363688 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.378277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.378331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.378341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.378359 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.378477 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.378671 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.394783 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.409643 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.425879 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.441518 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.458406 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.470848 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:04Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.481883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.481936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.481951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.481971 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.481989 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.585400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.585452 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.585467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.585484 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.585496 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.688396 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.688683 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.688838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.688877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.688894 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.767442 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:26:27.377211923 +0000 UTC Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.792611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.792651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.792663 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.792684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.792713 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.837376 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.837423 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.837570 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:04 crc kubenswrapper[4771]: E0129 09:07:04.837766 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.895155 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.895216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.895230 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.895261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.895279 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.997848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.997901 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.997911 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.997930 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:04 crc kubenswrapper[4771]: I0129 09:07:04.997945 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:04Z","lastTransitionTime":"2026-01-29T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.027320 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:05 crc kubenswrapper[4771]: E0129 09:07:05.027568 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:05 crc kubenswrapper[4771]: E0129 09:07:05.027683 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:07.027658193 +0000 UTC m=+47.150498420 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.101126 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.101922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.102032 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.102112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.102176 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.205363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.205423 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.205437 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.205459 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.205475 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.308473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.308544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.308562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.308586 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.308601 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.412245 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.412323 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.412338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.412366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.412384 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.515441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.515503 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.515512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.515531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.515542 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.618393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.618446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.618457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.618476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.618489 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.720570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.720616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.720625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.720641 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.720651 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.768203 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:37:40.285743136 +0000 UTC Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.823293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.823352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.823363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.823380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.823396 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.837548 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.837629 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:05 crc kubenswrapper[4771]: E0129 09:07:05.837735 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:05 crc kubenswrapper[4771]: E0129 09:07:05.837922 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.926358 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.926413 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.926426 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.926442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:05 crc kubenswrapper[4771]: I0129 09:07:05.926453 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:05Z","lastTransitionTime":"2026-01-29T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.029190 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.029244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.029256 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.029282 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.029299 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.132463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.132526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.132541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.132577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.132589 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.235746 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.235812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.235828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.235846 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.235859 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.338577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.338619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.338635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.338656 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.338671 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.440793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.440828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.440839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.440854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.440867 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.542970 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.543005 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.543015 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.543032 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.543043 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.645478 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.645565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.645576 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.645594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.645607 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.748234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.748297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.748309 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.748328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.748340 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.768404 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:41:50.944601332 +0000 UTC Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.837921 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.837951 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:06 crc kubenswrapper[4771]: E0129 09:07:06.838145 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:06 crc kubenswrapper[4771]: E0129 09:07:06.838316 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.850312 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.850356 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.850365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.850381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.850391 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.952829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.952887 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.952897 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.952916 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:06 crc kubenswrapper[4771]: I0129 09:07:06.952927 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:06Z","lastTransitionTime":"2026-01-29T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.049556 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:07 crc kubenswrapper[4771]: E0129 09:07:07.049779 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:07 crc kubenswrapper[4771]: E0129 09:07:07.049870 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:11.049847041 +0000 UTC m=+51.172687268 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.056293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.056358 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.056377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.056401 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.056417 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.159127 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.159175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.159188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.159207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.159220 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
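The MountVolume failure above is a cache-population race, not a permissions problem: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the secret has not yet been synced into the kubelet's object cache, and nestedpendingoperations reschedules the mount with an exponentially growing delay (durationBeforeRetry 4s here). A rough Go sketch of that doubling, with assumed constants (the real initial and max values are kubelet-internal; 500ms/2x/2m were chosen only to reproduce the 4s seen in the log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // assumed initial backoff
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d fails; durationBeforeRetry %v\n", attempt, delay)
            delay *= 2 // exponential backoff: 1s, 2s, 4s (as logged), ...
            if limit := 2 * time.Minute; delay > limit {
                delay = limit // assumed cap
            }
        }
    }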
Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.262483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.262541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.262552 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.262575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.262588 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.365492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.365537 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.365551 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.365570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.365584 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.468501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.468553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.468563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.468582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.468594 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.572277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.572331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.572341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.572359 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.572371 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.675148 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.675207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.675218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.675241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.675257 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
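The certificate_manager lines in this stretch print a different "rotation deadline" on each pass because client-go re-draws the deadline as a jittered point at roughly 70-90% of the certificate's validity window; with the drawn deadline already in the past, rotation is due immediately and the value is recomputed on every sync. A hedged Go sketch of that computation (the 0.7/0.9 bounds and the one-year validity are assumptions, not taken from the log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a uniformly jittered point in [70%, 90%) of the
    // certificate's lifetime, which is why consecutive log lines differ.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(float64(total) * frac))
    }

    func main() {
        // Expiration copied from the log; validity period is assumed.
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.Add(-365 * 24 * time.Hour)
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
        }
    }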
Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.769553 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:04:28.863104316 +0000 UTC Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.777496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.777543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.777552 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.777570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.777580 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.837907 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.837907 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:07 crc kubenswrapper[4771]: E0129 09:07:07.838165 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:07 crc kubenswrapper[4771]: E0129 09:07:07.838283 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.880442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.880495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.880508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.880527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.880537 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.984091 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.984154 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.984175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.984204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:07 crc kubenswrapper[4771]: I0129 09:07:07.984227 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:07Z","lastTransitionTime":"2026-01-29T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.086947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.086983 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.086994 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.087014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.087025 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.189788 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.189858 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.189873 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.189898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.189916 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.292934 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.292992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.293006 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.293026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.293041 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.396364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.396419 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.396434 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.396456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.396470 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.499330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.499793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.499888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.499970 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.500088 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.603009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.603057 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.603068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.603085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.603097 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.706500 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.707019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.707093 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.707175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.707248 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.769793 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:55:44.14841287 +0000 UTC Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.810572 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.811006 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.811122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.811270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.811359 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.837285 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.837340 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:08 crc kubenswrapper[4771]: E0129 09:07:08.837552 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:08 crc kubenswrapper[4771]: E0129 09:07:08.837924 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.915172 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.915248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.915262 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.915299 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:08 crc kubenswrapper[4771]: I0129 09:07:08.915314 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:08Z","lastTransitionTime":"2026-01-29T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.018472 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.018527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.018541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.018563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.018582 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.122127 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.122204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.122221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.122246 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.122262 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.225665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.225750 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.225762 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.225786 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.225800 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.328773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.328891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.328914 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.328950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.328979 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.432373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.432430 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.432443 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.432462 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.432476 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.535780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.535830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.535843 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.535862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.535872 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.639167 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.639226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.639238 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.639261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.639277 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.743125 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.743200 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.743219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.743247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.743266 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.770797 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:55:14.450746561 +0000 UTC Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.837376 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.837533 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:09 crc kubenswrapper[4771]: E0129 09:07:09.837577 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:09 crc kubenswrapper[4771]: E0129 09:07:09.837830 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.846155 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.846197 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.846209 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.846228 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.846240 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.948728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.948780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.948791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.948812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:09 crc kubenswrapper[4771]: I0129 09:07:09.948823 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:09Z","lastTransitionTime":"2026-01-29T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.051841 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.051882 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.051890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.051905 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.051915 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.154987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.155049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.155064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.155089 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.155103 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.258567 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.258614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.258623 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.258641 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.258650 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.362008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.362095 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.362119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.362151 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.362174 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.465378 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.465441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.465455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.465480 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.465495 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.569119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.569176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.569186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.569206 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.569219 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.671450 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.671486 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.671496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.671510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.671519 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.771807 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:42:46.504202716 +0000 UTC Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.773941 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.773991 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.774009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.774029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.774041 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.837339 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.837376 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:10 crc kubenswrapper[4771]: E0129 09:07:10.837552 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:10 crc kubenswrapper[4771]: E0129 09:07:10.837726 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.872770 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b383
28bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.877005 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.877166 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.877266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.877562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.877644 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.893134 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.912166 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.927744 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.944615 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.963324 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.980603 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.981041 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.981188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.980866 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.981331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.981511 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:10Z","lastTransitionTime":"2026-01-29T09:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:10 crc kubenswrapper[4771]: I0129 09:07:10.996511 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:10Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.022281 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82acfd4859e74b59e0a366b2cd4b8ec2d2d0cdde8f4bec0aceecad3f5485287\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"message\\\":\\\"emoval\\\\nI0129 09:07:01.014244 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 09:07:01.014259 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 09:07:01.014266 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 09:07:01.014318 5990 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 09:07:01.014329 5990 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 09:07:01.014351 5990 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 09:07:01.014361 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 09:07:01.014369 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 09:07:01.014378 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 09:07:01.014384 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 09:07:01.014392 5990 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 09:07:01.016867 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 09:07:01.016893 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 09:07:01.016917 5990 factory.go:656] Stopping watch factory\\\\nI0129 09:07:01.016935 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0129 09:07:01.016981 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\
\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.042226 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.069039 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.084712 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.084748 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.084759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.084777 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.084789 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.092735 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.098125 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:11 crc kubenswrapper[4771]: E0129 09:07:11.098282 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:11 crc kubenswrapper[4771]: E0129 09:07:11.098357 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:19.098329281 +0000 UTC m=+59.221169508 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.107665 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.120451 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.134763 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.147052 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.159541 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:11Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.187309 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.187355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.187366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.187385 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.187396 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.290473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.290571 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.290591 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.290619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.290643 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.394286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.394329 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.394339 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.394353 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.394363 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.496956 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.497016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.497026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.497049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.497061 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.600391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.600464 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.600481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.600503 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.600517 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.703681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.703785 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.703801 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.703824 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.703843 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.772263 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 21:27:17.868132978 +0000 UTC Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.805939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.806020 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.806031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.806046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.806057 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.837204 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.837228 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:11 crc kubenswrapper[4771]: E0129 09:07:11.837544 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:11 crc kubenswrapper[4771]: E0129 09:07:11.837605 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.909218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.909269 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.909278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.909295 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:11 crc kubenswrapper[4771]: I0129 09:07:11.909308 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:11Z","lastTransitionTime":"2026-01-29T09:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.012290 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.012359 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.012377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.012404 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.012423 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.053940 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.055665 4771 scope.go:117] "RemoveContainer" containerID="ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6" Jan 29 09:07:12 crc kubenswrapper[4771]: E0129 09:07:12.056084 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.070406 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 
09:07:12.085203 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.099549 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 
09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.112041 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.115082 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.115137 4771 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.115147 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.115176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.115188 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.133358 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.146104 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.159412 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.172490 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.188911 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.214689 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
9T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c83
7aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.218040 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.218102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.218115 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.218137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.218154 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.231089 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.247120 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.259586 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.273941 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.289589 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40c
d6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.306389 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.320683 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.320763 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.320780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.320801 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.320815 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.332681 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
10s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e2
54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:12Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.423763 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.423835 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.423849 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.423874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.423892 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.527036 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.527089 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.527104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.527127 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.527140 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.630273 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.630332 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.630342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.630361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.630377 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.733770 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.733874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.733904 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.733945 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.733973 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.772806 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:38:04.583772459 +0000 UTC Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.836963 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.837059 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:12 crc kubenswrapper[4771]: E0129 09:07:12.837146 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:12 crc kubenswrapper[4771]: E0129 09:07:12.837335 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.837653 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.837758 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.837775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.837806 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.837826 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.940864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.940932 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.940947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.940972 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:12 crc kubenswrapper[4771]: I0129 09:07:12.940987 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:12Z","lastTransitionTime":"2026-01-29T09:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.044230 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.044296 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.044309 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.044331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.044349 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.147494 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.147544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.147553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.147568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.147577 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.213178 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.232163 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.235053 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.251140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.251182 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.251193 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.251219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.251247 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.257144 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.287226 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.301077 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.316592 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.336854 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.354968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.355035 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.355052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.355076 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.355092 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.356352 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.370346 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.387277 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.403287 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.416466 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.437170 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.454194 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.458520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.458570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.458581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.458607 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.458628 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.472121 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.486773 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.498880 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.513414 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f
04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.562588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.563056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.563131 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.563228 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.563316 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.666921 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.667481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.667553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.667628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.667728 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.771600 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.772458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.772569 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.772718 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.772829 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.772947 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:08:46.728967992 +0000 UTC Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.837387 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.838006 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:13 crc kubenswrapper[4771]: E0129 09:07:13.838118 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:13 crc kubenswrapper[4771]: E0129 09:07:13.838422 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.876126 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.876168 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.876180 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.876199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.876214 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.979629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.979744 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.979775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.979805 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:13 crc kubenswrapper[4771]: I0129 09:07:13.979824 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:13Z","lastTransitionTime":"2026-01-29T09:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.083355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.083437 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.083452 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.083470 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.083481 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.185877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.185915 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.185927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.185945 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.185958 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.289032 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.289109 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.289127 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.289149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.289167 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.392239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.392319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.392347 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.392373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.392389 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.459571 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.459635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.459647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.459663 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.459674 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.474735 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:14Z is after 
2025-08-24T17:21:41Z" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.479956 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.479987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.480000 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.480016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.480028 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.496042 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:14Z is after 
2025-08-24T17:21:41Z" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.501199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.501242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.501252 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.501270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.501282 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.514943 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:14Z is after 
2025-08-24T17:21:41Z" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.518738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.518787 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.518800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.518821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.518837 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.534055 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:14Z is after 
2025-08-24T17:21:41Z" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.537602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.537635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.537645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.537662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.537674 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.551090 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:14Z is after 
2025-08-24T17:21:41Z" Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.551276 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.553163 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.553193 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.553203 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.553222 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.553235 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.656134 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.656169 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.656178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.656195 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.656206 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.760051 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.760445 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.760530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.760602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.760662 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.773291 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:59:54.3718538 +0000 UTC Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.838039 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.838580 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.838925 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:14 crc kubenswrapper[4771]: E0129 09:07:14.839021 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.864208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.864548 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.864608 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.864676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.864784 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.967619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.967723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.967734 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.967753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:14 crc kubenswrapper[4771]: I0129 09:07:14.967765 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:14Z","lastTransitionTime":"2026-01-29T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.070568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.070611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.070621 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.070639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.070650 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.174998 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.175087 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.175106 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.175132 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.175154 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.277860 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.277921 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.277934 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.277959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.277975 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.381011 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.381081 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.381097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.381125 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.381143 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.451096 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.451241 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451285 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:07:47.451256877 +0000 UTC m=+87.574097104 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.451353 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.451395 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451417 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451439 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451453 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451463 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451495 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:47.451487704 +0000 UTC m=+87.574327931 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451508 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 09:07:47.451503054 +0000 UTC m=+87.574343281 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.451423 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451602 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451615 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451625 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451630 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451669 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:47.451657708 +0000 UTC m=+87.574497935 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.451812 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:47.451783932 +0000 UTC m=+87.574624159 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.483939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.483985 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.484001 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.484027 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.484046 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.587262 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.587345 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.587365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.587393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.587415 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.690618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.690688 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.690724 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.690751 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.690769 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.773591 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:39:36.225173229 +0000 UTC Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.794461 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.794544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.794563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.794597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.794618 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.836937 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.837078 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.837162 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:15 crc kubenswrapper[4771]: E0129 09:07:15.837337 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.898796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.898855 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.898864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.898882 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:15 crc kubenswrapper[4771]: I0129 09:07:15.898894 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:15Z","lastTransitionTime":"2026-01-29T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.001877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.001929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.001938 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.001958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.001969 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.105194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.105259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.105276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.105300 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.105317 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.208567 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.208618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.208629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.208646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.208657 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.312556 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.312624 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.312640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.312661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.312674 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.415592 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.415654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.415666 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.415721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.415736 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.518598 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.518634 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.518643 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.518664 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.518685 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.622011 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.622058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.622071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.622090 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.622101 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.724901 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.724943 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.724957 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.724976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.724987 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.773962 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:11:47.874803508 +0000 UTC
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.827646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.827713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.827729 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.827749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.827761 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.836955 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:16 crc kubenswrapper[4771]: E0129 09:07:16.837218 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.836992 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:16 crc kubenswrapper[4771]: E0129 09:07:16.837366 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.931588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.931645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.931657 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.931675 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:16 crc kubenswrapper[4771]: I0129 09:07:16.931689 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:16Z","lastTransitionTime":"2026-01-29T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.034800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.034850 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.034862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.034881 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.034892 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.138162 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.138540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.138642 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.138769 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.138858 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.241013 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.241093 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.241109 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.241136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.241150 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.344043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.344442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.344532 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.344606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.344668 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.448524 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.448579 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.448593 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.448613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.448624 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.551503 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.551541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.551551 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.551568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.551578 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.654905 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.655414 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.655487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.655573 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.655654 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.758296 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.758338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.758348 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.758364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.758375 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.774638 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:52:53.209860725 +0000 UTC
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.837355 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:07:17 crc kubenswrapper[4771]: E0129 09:07:17.837569 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.837874 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:07:17 crc kubenswrapper[4771]: E0129 09:07:17.838000 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.861419 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.861464 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.861475 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.861493 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.861503 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.964249 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.964303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.964313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.964333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:17 crc kubenswrapper[4771]: I0129 09:07:17.964344 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:17Z","lastTransitionTime":"2026-01-29T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.067186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.067238 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.067253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.067276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.067295 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.170932 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.170997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.171014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.171050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.171068 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.274169 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.274216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.274228 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.274248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.274261 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.376591 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.376640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.376652 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.376672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.376683 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.479642 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.479713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.479728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.479748 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.479757 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.582502 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.582550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.582559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.582576 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.582588 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.684716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.684778 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.684793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.684816 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.684833 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.775233 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:26:03.213404959 +0000 UTC
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.787616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.787741 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.787763 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.787793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.787814 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.837409 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.837469 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:18 crc kubenswrapper[4771]: E0129 09:07:18.837624 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:18 crc kubenswrapper[4771]: E0129 09:07:18.837743 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.891567 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.891633 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.891644 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.891661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.891676 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.994368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.994442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.994499 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.994537 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:18 crc kubenswrapper[4771]: I0129 09:07:18.994561 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:18Z","lastTransitionTime":"2026-01-29T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.097751 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.097829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.097865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.097890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.097903 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.194114 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:19 crc kubenswrapper[4771]: E0129 09:07:19.194411 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 09:07:19 crc kubenswrapper[4771]: E0129 09:07:19.194539 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:07:35.194511899 +0000 UTC m=+75.317352126 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.201836 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.201871 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.201880 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.201919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.201931 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.304950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.305011 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.305019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.305039 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.305055 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.407673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.407773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.407788 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.407807 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.407819 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.511543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.511634 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.511648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.511665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.511679 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.614869 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.614929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.614941 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.614963 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.614978 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.718094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.718501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.718575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.718659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.718751 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.776445 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:46:07.129976007 +0000 UTC
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.821368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.821422 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.821435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.821456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.821471 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.837069 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.837171 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:07:19 crc kubenswrapper[4771]: E0129 09:07:19.837232 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:19 crc kubenswrapper[4771]: E0129 09:07:19.837378 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.924646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.924717 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.924737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.924757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:19 crc kubenswrapper[4771]: I0129 09:07:19.924772 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:19Z","lastTransitionTime":"2026-01-29T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.027329 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.027391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.027403 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.027423 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.027437 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.131179 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.131221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.131231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.131248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.131259 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.234712 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.234791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.234802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.234822 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.234834 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.337610 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.337651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.337661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.337677 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.337688 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.440871 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.440939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.440956 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.440975 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.440988 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.544029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.544078 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.544090 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.544108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.544124 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.647045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.647094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.647102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.647120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.647131 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.749140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.749510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.749617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.749727 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.749884 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.777428 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:42:44.75901243 +0000 UTC Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.837953 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.838061 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:20 crc kubenswrapper[4771]: E0129 09:07:20.838244 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:20 crc kubenswrapper[4771]: E0129 09:07:20.838558 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
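The NotReady heartbeat above repeats roughly every 100 ms because the container runtime's network-readiness probe keeps failing: there is no CNI configuration file in /etc/kubernetes/cni/net.d/ yet, and ovnkube-controller, which would write one, is crash-looping further down in this log. A minimal sketch of what that readiness test amounts to, assuming only "at least one .conf/.conflist/.json file in the configured directory" — the real CRI-O/ocicni logic is more involved, so treat this as illustrative:

```go
// cnicheck.go: a sketch of the readiness condition behind the recurring
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
// The directory path and accepted extensions are assumptions, not the
// exact CRI-O/ocicni implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether the directory holds at least one
// plausible CNI network configuration file.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err // a missing directory also means "not ready"
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true")
}
```

Once a configuration file lands in that directory, the runtime should report NetworkReady=true on the next sync and the setters.go "Node became not ready" heartbeats stop.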
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.853663 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.854165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.854275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.854370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.854475 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.857508 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.875038 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.898280 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75
c4d11133dd27eb27f99a39a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.910659 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.922462 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.935914 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.949486 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.957839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.957919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.957931 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.957956 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.957974 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:20Z","lastTransitionTime":"2026-01-29T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.962767 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.979244 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:20 crc kubenswrapper[4771]: I0129 09:07:20.993792 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.009105 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.023538 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.044977 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.060239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.060284 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.060297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.060440 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.060456 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.061313 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.074985 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.090777 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.103660 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.119171 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:21Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.163044 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.163092 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.163100 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.163118 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.163130 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.265845 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.265895 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.265908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.265928 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.265944 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.369432 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.369490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.369501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.369519 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.369528 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.473231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.473333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.473343 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.473364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.473379 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.576031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.576089 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.576102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.576120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.576131 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.679055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.679112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.679122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.679141 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.679152 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.778118 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:31:21.797216507 +0000 UTC Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.782819 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.782876 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.782891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.782913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.782927 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.838754 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:21 crc kubenswrapper[4771]: E0129 09:07:21.838911 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.838912 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:21 crc kubenswrapper[4771]: E0129 09:07:21.839130 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
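Every "Failed to update status for pod" entry above fails with the same TLS error: the pod.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29. Because the kubelet's status manager must go through this webhook, every pod status patch is rejected with the same x509 message. A minimal stand-alone probe of that validity check (a sketch, not part of the log; it assumes the webhook is reachable from the node on that port):

    // cert_probe.go - reproduce the x509 validity check that fails in the
    // entries above, against the network-node-identity webhook endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// Skip chain verification so the handshake succeeds even though
    	// the presented certificate is expired.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
    		&tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Println("dial:", err)
    		return
    	}
    	defer conn.Close()

    	certs := conn.ConnectionState().PeerCertificates
    	if len(certs) == 0 {
    		fmt.Println("no peer certificate presented")
    		return
    	}
    	cert := certs[0]
    	now := time.Now()
    	fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
    	// The same comparison that x509 verification reports as
    	// "certificate has expired or is not yet valid".
    	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
    		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
    			now.Format(time.RFC3339),
    			cert.NotBefore.Format(time.RFC3339),
    			cert.NotAfter.Format(time.RFC3339))
    	}
    }

Until that certificate is regenerated (or the large clock jump is corrected), the patch failures repeat on every status sync, which is why the same error string recurs for every pod in this window.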
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.886432 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.886518 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.886536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.886598 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.886622 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.989784 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.989838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.989847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.989870 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:21 crc kubenswrapper[4771]: I0129 09:07:21.989883 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:21Z","lastTransitionTime":"2026-01-29T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.103368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.103681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.103804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.103902 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.103971 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.209047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.209489 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.209571 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.209658 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.209752 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.313360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.314011 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.314108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.314203 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.314318 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.417572 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.417631 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.417645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.417664 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.417680 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.521074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.521128 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.521140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.521164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.521177 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.623452 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.623495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.623507 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.623525 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.623537 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.727121 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.727180 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.727190 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.727209 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.727226 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.779789 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:50:26.275709746 +0000 UTC Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.830468 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.830526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.830536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.830557 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.830569 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.837821 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.837894 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:22 crc kubenswrapper[4771]: E0129 09:07:22.837949 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:22 crc kubenswrapper[4771]: E0129 09:07:22.838028 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.838918 4771 scope.go:117] "RemoveContainer" containerID="ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.934022 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.934474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.934492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.934515 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:22 crc kubenswrapper[4771]: I0129 09:07:22.934530 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:22Z","lastTransitionTime":"2026-01-29T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.037074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.037110 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.037120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.037137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.037147 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.140550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.140606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.140618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.140640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.140652 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.244780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.244834 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.244851 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.244875 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.244893 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.261719 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/1.log" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.272202 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.273265 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.286993 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.301355 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.315270 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.334641 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.353096 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.353350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.353490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.353509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.353533 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.353739 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.377602 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.393300 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.417031 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.437809 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.457071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.457113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.457124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.457142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.457156 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.478848 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.505901 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd95
6ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.520416 4771 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.530852 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.553309 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.559143 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.559185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.559199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.559219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.559232 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.568542 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.581484 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.593639 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.613559 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a30849
47379b1a0cf6b44f1edcec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.662103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.662173 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.662196 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.662221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.662239 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.770606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.770666 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.770675 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.770711 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.770731 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.780808 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:41:38.319492492 +0000 UTC Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.837438 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.837541 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:23 crc kubenswrapper[4771]: E0129 09:07:23.837633 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:23 crc kubenswrapper[4771]: E0129 09:07:23.837772 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.873235 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.873274 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.873285 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.873302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.873314 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.976428 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.976476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.976488 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.976505 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:23 crc kubenswrapper[4771]: I0129 09:07:23.976519 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:23Z","lastTransitionTime":"2026-01-29T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.079492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.079535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.079560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.079582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.079596 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.182138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.182179 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.182189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.182204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.182215 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.279917 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/2.log" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.281170 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/1.log" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.284278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.284325 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.284337 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.284354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.284369 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.285311 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f" exitCode=1 Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.285375 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.285431 4771 scope.go:117] "RemoveContainer" containerID="ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.286817 4771 scope.go:117] "RemoveContainer" containerID="8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f" Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.287203 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.311101 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.326288 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.341089 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.358735 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.373463 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.387222 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.387524 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.387554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.387565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.387584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.387595 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.401818 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.417292 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.431851 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.456664 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.476006 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd95
6ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.494523 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.494584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.494597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.494618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.494633 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.498188 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.513740 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.526038 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.546006 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f
04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.559221 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.573583 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.592205 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a30849
47379b1a0cf6b44f1edcec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5cc06e10e9e393a5e9b1176f9e5ab5f5a32c75c4d11133dd27eb27f99a39a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"message\\\":\\\"LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120869 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 09:07:02.120875 6198 services_controller.go:453] Built service openshift-console-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 09:07:02.120879 6198 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 09:07:02.120883 6198 services_controller.go:454] Service openshift-console-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 09:07:02.120896 6198 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"ebd4748e-0473-49fb-88ad-83dbb221791a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: 
retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.596760 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.596796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.596805 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.596820 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.596830 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.662559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.662611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.662623 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.662642 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.662655 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.677669 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.681771 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.681831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.681844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.681861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.681872 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.696196 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.701071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.701097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.701108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.701122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.701158 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.714025 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.718614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.718666 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.718678 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.718722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.718735 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.732900 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.737458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.737497 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.737505 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.737520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.737529 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.750117 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.750232 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.752416 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.752474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.752487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.752508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.752520 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.781791 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:17:04.335484261 +0000 UTC Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.837449 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.837467 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.837686 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:24 crc kubenswrapper[4771]: E0129 09:07:24.837824 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.854837 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.854871 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.854879 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.854893 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.854903 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.957796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.957834 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.957843 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.957859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:24 crc kubenswrapper[4771]: I0129 09:07:24.957871 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:24Z","lastTransitionTime":"2026-01-29T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.061045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.061090 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.061104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.061123 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.061137 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.164711 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.164777 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.164794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.164818 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.164833 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.267924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.268021 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.268033 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.268055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.268070 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.291343 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/2.log" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.295938 4771 scope.go:117] "RemoveContainer" containerID="8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f" Jan 29 09:07:25 crc kubenswrapper[4771]: E0129 09:07:25.296168 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.308763 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 
2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.321946 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.333779 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.347992 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.365111 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.375119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.375203 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.375220 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.375246 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.375264 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.383425 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.398531 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.414196 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.428562 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.451840 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.466450 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.477676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.477730 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.477742 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.477761 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.477773 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.480264 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.497582 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.511281 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.527312 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f
04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.542141 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.557524 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.579839 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a30849
47379b1a0cf6b44f1edcec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:25Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.580506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.580549 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.580560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.580579 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.580592 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.683794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.683900 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.683919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.683942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.683956 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.782312 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:35:31.043501358 +0000 UTC Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.786492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.786534 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.786547 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.786565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.786576 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.837731 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.837809 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:25 crc kubenswrapper[4771]: E0129 09:07:25.837944 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:25 crc kubenswrapper[4771]: E0129 09:07:25.838149 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.889440 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.889521 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.889533 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.889553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.889563 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.992862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.992917 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.992929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.992947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:25 crc kubenswrapper[4771]: I0129 09:07:25.992960 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:25Z","lastTransitionTime":"2026-01-29T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.096247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.096288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.096296 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.096314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.096324 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.199020 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.199124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.199142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.199162 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.199202 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.301882 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.301929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.301940 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.301962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.301975 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.404960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.405028 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.405043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.405066 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.405079 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.507584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.507640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.507654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.507672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.507687 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.611110 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.611163 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.611173 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.611193 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.611208 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.713973 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.714019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.714030 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.714049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.714063 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.783087 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:54:18.936357233 +0000 UTC Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.817305 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.817376 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.817390 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.817412 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.817426 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.836903 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.836903 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:26 crc kubenswrapper[4771]: E0129 09:07:26.837070 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:26 crc kubenswrapper[4771]: E0129 09:07:26.837296 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.920436 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.920796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.920867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.920970 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:26 crc kubenswrapper[4771]: I0129 09:07:26.921072 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:26Z","lastTransitionTime":"2026-01-29T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.023157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.023390 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.023516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.023591 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.023655 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.126517 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.126804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.126908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.127043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.127134 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.229946 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.230284 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.230375 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.230459 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.230533 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.338399 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.338458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.338481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.338506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.338523 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.441102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.441175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.441188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.441212 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.441227 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.543772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.543829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.543840 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.543859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.543870 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.647065 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.647133 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.647145 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.647165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.647177 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.749396 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.749446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.749456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.749478 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.749491 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.783762 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:07:42.005678861 +0000 UTC Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.837618 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.837836 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:27 crc kubenswrapper[4771]: E0129 09:07:27.838034 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:27 crc kubenswrapper[4771]: E0129 09:07:27.838489 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.851645 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.853021 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.853086 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.853099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.853119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.853133 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.956082 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.956144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.956158 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.956181 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:27 crc kubenswrapper[4771]: I0129 09:07:27.956198 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:27Z","lastTransitionTime":"2026-01-29T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.059194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.059248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.059259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.059275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.059285 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.161780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.162082 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.162163 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.162252 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.162318 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.265305 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.265350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.265363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.265382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.265393 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.368147 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.368191 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.368201 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.368219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.368229 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.470436 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.470473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.470481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.470495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.470505 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.572676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.572935 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.572997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.573119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.573197 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.676314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.676575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.676679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.676802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.676869 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.780686 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.780756 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.780768 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.780784 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.780794 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.784235 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:06:18.622185081 +0000 UTC Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.837748 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.837748 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:28 crc kubenswrapper[4771]: E0129 09:07:28.837926 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:28 crc kubenswrapper[4771]: E0129 09:07:28.837962 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.883636 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.883920 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.883984 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.884097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.884157 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.986767 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.986802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.986810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.986824 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:28 crc kubenswrapper[4771]: I0129 09:07:28.986833 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:28Z","lastTransitionTime":"2026-01-29T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.089807 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.090054 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.090135 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.090227 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.090306 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.192929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.192969 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.192977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.192992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.193002 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.295306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.295556 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.295624 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.295808 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.295881 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.398603 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.398660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.398672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.398690 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.398718 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.501748 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.501896 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.501909 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.501931 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.501943 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.605099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.605193 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.605207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.605228 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.605245 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.708824 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.709186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.709268 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.709373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.709476 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.784787 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:44:35.467893144 +0000 UTC Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.811989 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.812268 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.812394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.812494 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.812584 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.837793 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.837793 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:29 crc kubenswrapper[4771]: E0129 09:07:29.838393 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:29 crc kubenswrapper[4771]: E0129 09:07:29.838560 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.915257 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.915302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.915312 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.915331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:29 crc kubenswrapper[4771]: I0129 09:07:29.915343 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:29Z","lastTransitionTime":"2026-01-29T09:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.021883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.022324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.022336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.022554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.022632 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.125576 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.125627 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.125639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.125661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.125675 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.229156 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.229218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.229236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.229260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.229275 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.332395 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.332436 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.332446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.332460 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.332492 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.434980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.435025 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.435037 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.435060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.435077 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.538476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.538524 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.538537 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.538557 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.538571 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.640993 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.641040 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.641054 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.641075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.641091 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.744355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.744405 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.744415 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.744430 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.744442 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.785821 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:59:42.541433591 +0000 UTC Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.837620 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.837638 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:30 crc kubenswrapper[4771]: E0129 09:07:30.837854 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:30 crc kubenswrapper[4771]: E0129 09:07:30.837926 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.846364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.846402 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.846411 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.846427 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.846441 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.852557 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-paren
t\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.865641 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 
09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.878640 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.894180 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.913969 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.929211 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.944843 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.952183 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.952428 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.952590 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.952768 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.952906 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:30Z","lastTransitionTime":"2026-01-29T09:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.956883 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.973398 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 
2025-08-24T17:21:41Z" Jan 29 09:07:30 crc kubenswrapper[4771]: I0129 09:07:30.995044 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026
-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.010543 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.023278 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.036678 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.050259 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.055374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.055416 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.055432 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.055452 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.055467 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:31Z","lastTransitionTime":"2026-01-29T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.065527 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.082114 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.094274 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16
fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.105797 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.132141 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:31Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.158338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.158374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 
09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.158385 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.158408 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.158422 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:31Z","lastTransitionTime":"2026-01-29T09:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the preceding five-entry block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats, identical except for timestamps, at 09:07:31.260, 09:07:31.363, 09:07:31.467, 09:07:31.570, 09:07:31.673 and 09:07:31.776]
Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.786987 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:59:53.476889117 +0000 UTC
Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.837571 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:07:31 crc kubenswrapper[4771]: I0129 09:07:31.837613 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:07:31 crc kubenswrapper[4771]: E0129 09:07:31.837733 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:07:31 crc kubenswrapper[4771]: E0129 09:07:31.837861 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[the five-entry node-status block repeats at 09:07:31.880 and 09:07:31.983, then at 09:07:32.086, .190, .293, .396, .499, .602 and .706]
Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.787942 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:54:06.880004013 +0000 UTC
[the five-entry node-status block repeats at 09:07:32.809]
Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.838133 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.838250 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:07:32 crc kubenswrapper[4771]: E0129 09:07:32.838341 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
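[diagnostic aside: every "Failed to update status for pod" entry above fails the same way: the kubelet's patch is rejected because the webhook "pod.network-node-identity.openshift.io" at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29. A minimal sketch of how one might confirm this from the node follows; it assumes python3 with the third-party cryptography package is available, and the host/port are taken from the Post URL in the errors above, nothing else.]

    import ssl, datetime
    from cryptography import x509  # third-party; assumed installed for this sketch

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the webhook Post URL in the log

    # Fetch the serving certificate WITHOUT verifying it (verification is exactly
    # what fails in the log), then compare its notAfter against the current clock.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())
    not_after = cert.not_valid_after_utc  # cryptography >= 42; older: not_valid_after
    now = datetime.datetime.now(datetime.timezone.utc)
    print(f"notAfter={not_after.isoformat()} now={now.isoformat()} expired={now > not_after}")

[ssl.get_server_certificate is used here precisely because a normally verified TLS handshake would fail with the same x509 expiry error the kubelet reports.]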
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:32 crc kubenswrapper[4771]: E0129 09:07:32.838539 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.912514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.912575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.912587 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.912607 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:32 crc kubenswrapper[4771]: I0129 09:07:32.912620 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:32Z","lastTransitionTime":"2026-01-29T09:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.016175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.016235 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.016245 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.016265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.016278 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.119159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.119215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.119231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.119249 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.119260 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.222913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.222978 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.222988 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.223010 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.223024 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.326561 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.326601 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.326609 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.326622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.326631 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.428688 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.428758 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.428769 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.428785 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.428795 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.531872 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.531930 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.531987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.532011 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.532027 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.635173 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.635214 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.635226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.635247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.635263 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.739858 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.739950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.739969 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.739995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.740058 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.788964 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:42:42.422185293 +0000 UTC Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.837577 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.837740 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:33 crc kubenswrapper[4771]: E0129 09:07:33.837818 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:33 crc kubenswrapper[4771]: E0129 09:07:33.837922 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.843131 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.843184 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.843203 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.843227 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.843253 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.946675 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.946765 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.946778 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.946800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:33 crc kubenswrapper[4771]: I0129 09:07:33.946814 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:33Z","lastTransitionTime":"2026-01-29T09:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.050017 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.050071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.050085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.050103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.050116 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.153291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.153352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.153364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.153385 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.153421 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.256470 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.256542 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.256554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.256574 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.256588 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.358913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.358958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.358975 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.358993 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.359005 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.461402 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.461435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.461445 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.461461 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.461482 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.563582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.563636 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.563647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.563664 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.563677 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.666360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.666412 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.666424 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.666444 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.666456 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.768460 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.768502 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.768511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.768531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.768540 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.789210 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:34:33.704675142 +0000 UTC Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.837792 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.837983 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:34 crc kubenswrapper[4771]: E0129 09:07:34.838159 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:34 crc kubenswrapper[4771]: E0129 09:07:34.838305 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.871770 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.871820 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.871829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.871848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.871860 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.959743 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.959787 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.959796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.959810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.959819 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: E0129 09:07:34.977420 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.982409 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.982463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.982476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.982498 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:34 crc kubenswrapper[4771]: I0129 09:07:34.982513 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:34Z","lastTransitionTime":"2026-01-29T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:34 crc kubenswrapper[4771]: E0129 09:07:34.996133 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:34Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.000863 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.000940 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
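
The same five-entry block (four recorded node events plus the Ready=False condition) now repeats roughly every 100 ms. To confirm the node is looping on a single cause rather than flapping between different ones, the condition payloads can be tallied straight out of the journal. A minimal sketch, assuming the log text is piped in on stdin (for example from journalctl -u kubelet) and that each condition={...} payload is flat JSON, as it is here:

    #!/usr/bin/env python3
    import json
    import re
    import sys
    from collections import Counter

    # One dumped line can hold several journal entries, so scan with finditer.
    COND = re.compile(r'"Node became not ready" node="[^"]+" condition=(\{.*?\})')

    reasons = Counter()
    for line in sys.stdin:
        for m in COND.finditer(line):
            cond = json.loads(m.group(1))
            # Key on the reason plus the first clause of the message.
            reasons[(cond["reason"], cond["message"].split(":")[0])] += 1

    for (reason, summary), n in reasons.most_common():
        print(f"{n:4d}x {reason}: {summary}")

Run against this boot, every tally should land on the single pair KubeletNotReady / container runtime network not ready.
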
event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.000977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.000999 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.001015 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.015978 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:35Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.020596 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.020659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
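
Every status patch dies at the same point: the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) predates the node's clock. The following is a minimal sketch for pulling that certificate's validity window off the wire, run on the node itself; it deliberately disables verification so the expired certificate can be read, and it assumes the third-party cryptography package for DER parsing:

    #!/usr/bin/env python3
    import socket
    import ssl
    from cryptography import x509  # third-party; used only to decode the DER cert

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook Post above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept the expired cert so it can be read

    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    # These are the two instants the Go TLS stack compared against the node clock.
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)

A printed notAfter of 2025-08-24T17:21:41Z would confirm the webhook really is serving the expired certificate, rather than the client mistrusting a fresh one.
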
event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.020675 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.020738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.020767 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.037968 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:35Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.042360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.042404 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.042414 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.042429 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.042440 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.056215 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:35Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.056428 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.058407 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.058431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.058440 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.058454 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.058464 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.161354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.161409 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.161420 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.161437 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.161448 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.264608 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.264667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.264680 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.264722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.264744 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.284237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.284433 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.284546 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:08:07.284516614 +0000 UTC m=+107.407356851 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.368243 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.368320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.368334 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.368356 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.368379 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.471485 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.471577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.471597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.471616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.471627 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.574483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.574535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.574545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.574566 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.574581 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.677614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.677690 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.677730 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.677759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.677777 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.781373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.781426 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.781440 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.781465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.781480 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.789578 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:23:13.927304829 +0000 UTC Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.837781 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.837860 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.837962 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.838542 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.839217 4771 scope.go:117] "RemoveContainer" containerID="8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f" Jan 29 09:07:35 crc kubenswrapper[4771]: E0129 09:07:35.839673 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.883521 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.883553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.883562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.883583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.883592 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.986462 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.986540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.986550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.986570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:35 crc kubenswrapper[4771]: I0129 09:07:35.986582 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:35Z","lastTransitionTime":"2026-01-29T09:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.089397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.089448 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.089465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.089483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.089497 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.192083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.192134 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.192144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.192162 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.192173 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.295270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.295317 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.295326 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.295346 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.295361 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.398185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.398240 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.398257 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.398279 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.398295 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.500342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.500398 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.500413 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.500431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.500443 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.603113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.603191 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.603205 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.603223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.603235 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.706476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.706528 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.706541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.706560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.706573 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.790258 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:08:50.342795253 +0000 UTC Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.810178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.810240 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.810250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.810271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.810293 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.837957 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.837975 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:36 crc kubenswrapper[4771]: E0129 09:07:36.838138 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:36 crc kubenswrapper[4771]: E0129 09:07:36.838223 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.913242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.913307 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.913325 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.913349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:36 crc kubenswrapper[4771]: I0129 09:07:36.913368 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:36Z","lastTransitionTime":"2026-01-29T09:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.015812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.015844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.015853 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.015867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.015878 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.118431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.118483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.118496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.118581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.118596 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.220829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.220865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.220873 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.220888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.220897 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.323569 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.323619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.323629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.323647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.323658 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.427014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.427096 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.427110 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.427131 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.427145 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.529810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.529862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.529872 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.529886 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.529898 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.632245 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.632280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.632288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.632302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.632311 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.735967 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.736031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.736047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.736070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.736084 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.791154 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:23:16.678987759 +0000 UTC Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.837516 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.837522 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:37 crc kubenswrapper[4771]: E0129 09:07:37.837741 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:37 crc kubenswrapper[4771]: E0129 09:07:37.837851 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.840804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.840953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.840999 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.841025 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.841044 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.944058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.944112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.944121 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.944140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:37 crc kubenswrapper[4771]: I0129 09:07:37.944151 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:37Z","lastTransitionTime":"2026-01-29T09:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.047417 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.047476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.047490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.047510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.047523 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.150547 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.150646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.150658 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.150676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.150708 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.253143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.253501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.253588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.253675 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.253802 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.356596 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.356851 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.356969 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.357062 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.357139 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.460067 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.460282 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.460358 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.460482 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.460541 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.563200 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.563255 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.563268 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.563286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.563298 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.665737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.665790 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.665811 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.665829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.665840 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.768732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.769008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.769093 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.769193 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.769267 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.792073 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:35:42.63537945 +0000 UTC Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.838053 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.838086 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:38 crc kubenswrapper[4771]: E0129 09:07:38.838313 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:38 crc kubenswrapper[4771]: E0129 09:07:38.838324 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.871516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.871568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.871580 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.871596 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.871607 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.974441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.974478 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.974490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.974535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:38 crc kubenswrapper[4771]: I0129 09:07:38.974547 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:38Z","lastTransitionTime":"2026-01-29T09:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.077573 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.077635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.077646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.077665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.077678 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.180586 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.180631 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.180645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.180665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.180677 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.283175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.283215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.283224 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.283240 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.283250 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.385649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.385733 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.385749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.385799 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.385815 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.488688 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.488751 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.488762 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.488781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.488792 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.590759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.590986 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.591181 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.591260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.591335 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.693303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.693588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.693665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.693805 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.693891 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.793228 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:31:58.2589089 +0000 UTC Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.797137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.797178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.797195 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.797211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.797224 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.837665 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.837738 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:39 crc kubenswrapper[4771]: E0129 09:07:39.837859 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:39 crc kubenswrapper[4771]: E0129 09:07:39.837956 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.899919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.899990 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.900003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.900022 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:39 crc kubenswrapper[4771]: I0129 09:07:39.900037 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:39Z","lastTransitionTime":"2026-01-29T09:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.002913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.002958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.002967 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.002985 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.003024 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.105149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.105204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.105241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.105262 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.105275 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.207953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.207989 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.207997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.208011 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.208022 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.310821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.311050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.311150 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.311224 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.311284 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.349540 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/0.log" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.349606 4771 generic.go:334] "Generic (PLEG): container finished" podID="a46c7969-6ce3-4ba5-a1ab-73bbf487ae73" containerID="1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c" exitCode=1 Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.349648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerDied","Data":"1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.350193 4771 scope.go:117] "RemoveContainer" containerID="1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.375458 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd09
0c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.393849 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.429039 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.430400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.430420 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.430428 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.430440 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.430449 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.452888 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.467845 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.484321 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.499204 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.511678 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.529263 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a30849
47379b1a0cf6b44f1edcec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.535960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.536005 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.536014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.536029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.536040 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.543719 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.554016 4771 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.564503 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.577064 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.591124 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.605660 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.619069 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.633807 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.637817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.637845 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.637859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.637875 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.637886 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.645913 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.656833 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.740600 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.740633 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.740642 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.740655 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.740665 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.794434 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:16:46.653543909 +0000 UTC Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.837909 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.838012 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:40 crc kubenswrapper[4771]: E0129 09:07:40.838161 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:40 crc kubenswrapper[4771]: E0129 09:07:40.838257 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.842782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.842819 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.842828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.842842 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.842852 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.849535 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.860785 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.873738 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.886187 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.897725 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.907709 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.918239 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.935349 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensu
re-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.945750 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.945802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.945817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.945840 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.945852 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:40Z","lastTransitionTime":"2026-01-29T09:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.949336 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.960114 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.971786 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.982654 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:40 crc kubenswrapper[4771]: I0129 09:07:40.998254 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:40Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.009768 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.023758 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.043519 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a30849
47379b1a0cf6b44f1edcec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.047818 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.047851 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.047862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.047880 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.047892 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.056821 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.070321 4771 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.084306 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.150248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.150277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.150287 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.150302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.150313 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.253004 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.253046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.253056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.253069 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.253078 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.355785 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.355865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.355895 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.355919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.355958 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.356632 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/0.log" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.356723 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerStarted","Data":"2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.374107 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.386314 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.408574 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a30849
47379b1a0cf6b44f1edcec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.421137 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16
fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.433466 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.445220 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.458596 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.458959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.458981 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.458989 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.459005 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.459015 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.472003 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.482951 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.496155 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.509204 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.521459 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.534228 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.546790 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.560971 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.561012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.561024 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.561043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.561055 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.561865 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.579743 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.592909 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.604153 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.617113 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:41Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.663626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.663676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.663687 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.663729 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.663743 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.766342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.766372 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.766381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.766393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.766403 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.795394 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:54:18.363679007 +0000 UTC Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.837569 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.837656 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:41 crc kubenswrapper[4771]: E0129 09:07:41.837776 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:41 crc kubenswrapper[4771]: E0129 09:07:41.837964 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.874175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.874234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.874246 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.874299 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.874321 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.977427 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.977474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.977490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.977513 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:41 crc kubenswrapper[4771]: I0129 09:07:41.977530 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:41Z","lastTransitionTime":"2026-01-29T09:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.081112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.081200 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.081214 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.081236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.081250 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.184197 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.184283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.184306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.184335 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.184357 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.286538 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.286611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.286627 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.286644 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.286654 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.389410 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.389449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.389458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.389471 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.389480 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.492433 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.492478 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.492490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.492506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.492518 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.594944 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.594992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.595004 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.595023 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.595035 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.698236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.698290 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.698302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.698320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.698331 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.796395 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:27:52.435956373 +0000 UTC Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.800231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.800277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.800284 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.800298 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.800308 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.837788 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.837828 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:42 crc kubenswrapper[4771]: E0129 09:07:42.837956 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:42 crc kubenswrapper[4771]: E0129 09:07:42.838081 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.902286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.902365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.902381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.902400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:42 crc kubenswrapper[4771]: I0129 09:07:42.902410 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:42Z","lastTransitionTime":"2026-01-29T09:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.005486 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.005531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.005540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.005557 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.005568 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.108122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.108179 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.108192 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.108210 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.108225 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.210304 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.210379 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.210391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.210435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.210450 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.313506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.313550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.313558 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.313576 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.313586 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.416540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.416625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.416646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.416667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.416681 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.519202 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.519242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.519255 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.519280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.519295 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.621528 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.621582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.621604 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.621629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.621646 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.724267 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.724322 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.724331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.724353 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.724363 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.796886 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:43:41.919838402 +0000 UTC Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.827007 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.827070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.827082 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.827098 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.827108 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.837671 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:43 crc kubenswrapper[4771]: E0129 09:07:43.837855 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.837730 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:43 crc kubenswrapper[4771]: E0129 09:07:43.838018 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.928977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.929023 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.929038 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.929062 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:43 crc kubenswrapper[4771]: I0129 09:07:43.929078 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:43Z","lastTransitionTime":"2026-01-29T09:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.032155 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.032189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.032198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.032211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.032220 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.135044 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.135108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.135120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.135135 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.135146 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.238053 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.238094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.238104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.238119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.238129 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.341236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.341306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.341318 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.341339 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.341353 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.444119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.444160 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.444171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.444189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.444201 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.547857 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.547924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.547944 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.547969 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.547987 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.666434 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.666477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.666487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.666510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.666521 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.769854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.769883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.769892 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.769905 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.769914 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.797583 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:39:21.356000487 +0000 UTC Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.837444 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.837554 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:44 crc kubenswrapper[4771]: E0129 09:07:44.837660 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:44 crc kubenswrapper[4771]: E0129 09:07:44.837759 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.872273 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.872323 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.872334 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.872356 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.872369 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.975713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.975775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.975788 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.975810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:44 crc kubenswrapper[4771]: I0129 09:07:44.975825 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:44Z","lastTransitionTime":"2026-01-29T09:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.078680 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.078743 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.078754 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.078775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.078789 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.181068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.181109 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.181117 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.181133 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.181142 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.283812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.283877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.283891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.283910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.283923 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.292263 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.292327 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.292339 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.292363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.292376 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.308499 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.313590 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.313632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.313648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.313669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.313681 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.329200 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.334420 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.334495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.334507 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.334536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.334547 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.349139 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.354324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.354570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.354586 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.354605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.354617 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.367884 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.371647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.371721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.371739 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.371757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.371769 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.385396 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:45Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.385528 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.387490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.387562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.387581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.387602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.387618 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.490974 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.491036 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.491046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.491065 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.491077 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.594279 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.594331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.594342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.594363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.594376 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.697293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.697337 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.697349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.697368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.697382 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.798584 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:33:25.219301692 +0000 UTC Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.799978 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.800024 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.800036 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.800054 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.800064 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.837340 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.837488 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.837738 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:45 crc kubenswrapper[4771]: E0129 09:07:45.837955 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.902260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.902293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.902301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.902315 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:45 crc kubenswrapper[4771]: I0129 09:07:45.902324 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:45Z","lastTransitionTime":"2026-01-29T09:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.004948 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.004990 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.005000 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.005018 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.005030 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.107289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.107377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.107390 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.107410 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.107424 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.211045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.211098 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.211112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.211132 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.211145 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.313765 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.313800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.313809 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.313821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.313830 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.417535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.417605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.417623 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.417648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.417667 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.520772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.521083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.521175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.521258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.521315 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.624464 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.624855 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.624953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.625103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.625199 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.728303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.728350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.728362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.728380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.728391 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.799317 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:20:41.824505989 +0000 UTC Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.833569 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.833676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.833724 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.833747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.833759 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.836919 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.837254 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:46 crc kubenswrapper[4771]: E0129 09:07:46.837402 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:46 crc kubenswrapper[4771]: E0129 09:07:46.837735 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.937482 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.937577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.937594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.937617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:46 crc kubenswrapper[4771]: I0129 09:07:46.937636 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:46Z","lastTransitionTime":"2026-01-29T09:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.040305 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.040341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.040350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.040362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.040370 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.142723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.142782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.142808 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.142824 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.142836 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.245336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.245399 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.245411 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.245434 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.245444 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.348398 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.348443 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.348457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.348477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.348491 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.450777 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.450823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.450837 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.450853 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.450864 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.520976 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.521097 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.521132 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.521171 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.521195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521372 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521398 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521415 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521471 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:08:51.521453909 +0000 UTC m=+151.644294136 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521666 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:08:51.521658124 +0000 UTC m=+151.644498341 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521733 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521760 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:08:51.521754207 +0000 UTC m=+151.644594434 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521895 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521924 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521940 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.521974 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:08:51.521964203 +0000 UTC m=+151.644804430 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.522126 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.522304 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:08:51.522272032 +0000 UTC m=+151.645112399 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.554307 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.554366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.554376 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.554395 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.554411 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.657579 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.657616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.657627 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.657648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.657662 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.761114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.761206 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.761217 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.761241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.761255 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.800328 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:02:43.098594534 +0000 UTC Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.837893 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.837941 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.838332 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:47 crc kubenswrapper[4771]: E0129 09:07:47.838446 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.864086 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.864144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.864158 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.864178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.864188 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.966952 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.967012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.967022 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.967040 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:47 crc kubenswrapper[4771]: I0129 09:07:47.967052 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:47Z","lastTransitionTime":"2026-01-29T09:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.069271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.069340 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.069353 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.069371 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.069382 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.172291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.172335 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.172345 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.172363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.172373 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.275724 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.275805 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.275817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.275854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.275868 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.379254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.379304 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.379319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.379342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.379355 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.483596 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.483660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.483679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.483745 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.483764 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.586678 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.586747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.586756 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.586775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.586786 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.689409 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.689449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.689457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.689472 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.689482 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.792942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.793435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.793607 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.793750 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.793836 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.800875 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:05:36.86824409 +0000 UTC Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.837474 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.837582 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:48 crc kubenswrapper[4771]: E0129 09:07:48.838054 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:48 crc kubenswrapper[4771]: E0129 09:07:48.838145 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.838224 4771 scope.go:117] "RemoveContainer" containerID="8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.897829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.898330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.898340 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.898359 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:48 crc kubenswrapper[4771]: I0129 09:07:48.898371 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:48Z","lastTransitionTime":"2026-01-29T09:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.001426 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.001479 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.001520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.001594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.001614 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.105298 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.105340 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.105349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.105380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.105391 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.208164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.208204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.208216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.208236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.208248 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.312071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.312117 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.312126 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.312141 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.312150 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.388961 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/2.log" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.392143 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.392789 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.408534 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d
92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.414894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.414944 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.414957 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.414979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.414993 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.426025 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.461002 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.519460 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.519525 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.519539 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.519568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.519661 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.524705 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.542981 4771 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.561832 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.579873 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.606059 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 
09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.622243 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.622290 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.622301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.622322 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.622336 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.625387 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.641129 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.656408 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.670234 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.684609 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.701215 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.723164 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.725713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.725792 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.725995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.726019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.726034 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.750003 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.768331 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.785380 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.801171 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:02:48.794476134 +0000 UTC Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.804783 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:49Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.828809 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.828860 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.828872 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.828890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.828903 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.837130 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:49 crc kubenswrapper[4771]: E0129 09:07:49.837272 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.837570 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:49 crc kubenswrapper[4771]: E0129 09:07:49.837792 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.933013 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.933072 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.933083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.933102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:49 crc kubenswrapper[4771]: I0129 09:07:49.933119 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:49Z","lastTransitionTime":"2026-01-29T09:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.037325 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.037369 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.037381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.037400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.037411 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.140221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.140265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.140278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.140297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.140310 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.243451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.243514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.243527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.243545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.243561 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.346421 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.346495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.346519 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.346550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.346578 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.397219 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/3.log" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.398117 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/2.log" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.400583 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" exitCode=1 Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.400623 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.400662 4771 scope.go:117] "RemoveContainer" containerID="8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.402122 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:07:50 crc kubenswrapper[4771]: E0129 09:07:50.402305 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.423425 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.439604 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd95
6ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.449564 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.449637 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.449654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.449673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.449683 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.454508 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.470463 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.486015 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.502299 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f
04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.515182 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.529388 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.549390 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5
477573350c7a60f316c29168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:50Z\\\",\\\"message\\\":\\\"ervice k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 09:07:50.011791 6765 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 09:07:50.012626 6765 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 09:07:50.011629 6765 services_controller.go:444] Built service openshift-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0129 09:07:50.012643 6765 ovnkube.go:137] failed to run ovnkube: [failed to 
start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.552250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.552281 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.552294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.552314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.552330 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.562338 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.575482 4771 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.588453 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.601592 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.614062 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.628572 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.644251 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.655388 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.655440 4771 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.655455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.655481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.655496 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.658535 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.673891 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.689279 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 
09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.758733 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.758798 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.758810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.758833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.758846 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.801402 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:05:36.081411189 +0000 UTC Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.837164 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.837491 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:07:50 crc kubenswrapper[4771]: E0129 09:07:50.837589 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:07:50 crc kubenswrapper[4771]: E0129 09:07:50.837834 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.852901 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.862173 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.862220 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.862233 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.862253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.862268 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.869833 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:
06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.890320 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.907796 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.921981 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.936874 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.951242 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.964564 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.965079 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.965110 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.965122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.965144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.965161 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:50Z","lastTransitionTime":"2026-01-29T09:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.984329 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb7c3cea12cc43d67d089f903a374fba4a3084947379b1a0cf6b44f1edcec9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:24Z\\\",\\\"message\\\":\\\"]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 09:07:23.983567 6470 lb_config.go:1031] Cluster endpoints for openshift-ingress-canary/ingress-canary for network=default are: map[]\\\\nI0129 09:07:23.983586 6470 services_controller.go:443] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.34\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8888, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 09:07:23.983597 6470 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0129 09:07:23.983568 6470 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 09:07:23.983751 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:50Z\\\",\\\"message\\\":\\\"ervice k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 09:07:50.011791 6765 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 09:07:50.012626 6765 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 09:07:50.011629 6765 services_controller.go:444] Built service openshift-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0129 09:07:50.012643 6765 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was 
no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:50 crc kubenswrapper[4771]: I0129 09:07:50.999126 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:50Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.014492 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.027311 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.042348 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.057893 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.066672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.066766 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.066918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.066949 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.066964 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.074414 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.090145 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.107059 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.123220 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.137416 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.170138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.170185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.170199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.170214 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.170224 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.273083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.273179 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.273195 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.273216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.273230 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.376102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.376138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.376145 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.376161 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.376171 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.406471 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/3.log" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.410962 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:07:51 crc kubenswrapper[4771]: E0129 09:07:51.411299 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.427039 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.441380 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.462093 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.478341 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.480088 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.480140 4771 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.480155 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.480176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.480199 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.492680 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.509939 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.526507 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 
09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.547807 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.564168 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.580842 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.583310 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.583350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.583361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.583378 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.583434 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.594632 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.607217 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.622150 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.636893 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.653674 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.681346 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5
477573350c7a60f316c29168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:50Z\\\",\\\"message\\\":\\\"ervice k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 09:07:50.011791 6765 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 09:07:50.012626 6765 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 09:07:50.011629 6765 services_controller.go:444] Built service openshift-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0129 09:07:50.012643 6765 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.686676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.686774 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.686787 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.686832 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.687051 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.695475 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.710927 4771 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.726405 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:51Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.790365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.790444 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.790482 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.790509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.790526 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.802371 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:25:22.90779568 +0000 UTC Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.837351 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:51 crc kubenswrapper[4771]: E0129 09:07:51.837528 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.837817 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:51 crc kubenswrapper[4771]: E0129 09:07:51.837900 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.893611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.893649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.893659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.893674 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.893687 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.996990 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.997075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.997089 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.997109 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:51 crc kubenswrapper[4771]: I0129 09:07:51.997125 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:51Z","lastTransitionTime":"2026-01-29T09:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.100052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.100104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.100119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.100137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.100146 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.202921 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.203008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.203027 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.203055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.203074 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.306256 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.306317 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.306328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.306351 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.306367 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.409383 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.409456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.409466 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.409487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.409503 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.513374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.513420 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.513429 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.513446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.513459 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.617144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.617205 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.617218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.617237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.617251 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.721003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.721074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.721088 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.721111 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.721128 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.802795 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:23:33.684722241 +0000 UTC
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.824407 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.824465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.824480 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.824501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.824516 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.837930 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.837961 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:07:52 crc kubenswrapper[4771]: E0129 09:07:52.838127 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:07:52 crc kubenswrapper[4771]: E0129 09:07:52.838276 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.927237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.927280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.927291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.927311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:52 crc kubenswrapper[4771]: I0129 09:07:52.927324 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:52Z","lastTransitionTime":"2026-01-29T09:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.030280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.030366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.030376 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.030397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.030411 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.133821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.133888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.133899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.133922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.133935 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.236774 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.236825 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.236840 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.236862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.236876 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.340780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.340848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.340864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.340890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.340909 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.443795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.443880 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.443899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.443922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.443934 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.547525 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.547592 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.547612 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.547640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.547654 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.650529 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.650611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.650622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.650641 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.650656 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.754165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.754297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.754314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.754366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.754384 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.803761 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:59:42.49169144 +0000 UTC
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.837153 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.837221 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:07:53 crc kubenswrapper[4771]: E0129 09:07:53.837343 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:07:53 crc kubenswrapper[4771]: E0129 09:07:53.837445 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.857176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.857241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.857250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.857266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.857275 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.960720 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.960766 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.960775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.960814 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:53 crc kubenswrapper[4771]: I0129 09:07:53.960831 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:53Z","lastTransitionTime":"2026-01-29T09:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.064200 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.064254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.064263 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.064281 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.064294 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.167451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.167506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.167520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.167544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.167556 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.271336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.271384 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.271393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.271416 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.271429 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.374563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.375138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.375153 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.375178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.375199 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.477795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.477850 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.477861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.477880 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.477895 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.580429 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.580474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.580523 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.580540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.580548 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.683652 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.683716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.683727 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.683743 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.683757 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.787794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.787877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.787891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.787913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.787926 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.803962 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:34:10.341446101 +0000 UTC
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.837720 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.837748 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:07:54 crc kubenswrapper[4771]: E0129 09:07:54.837946 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:07:54 crc kubenswrapper[4771]: E0129 09:07:54.838041 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.890561 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.890603 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.890615 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.890632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.890643 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.993254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.993312 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.993349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.993376 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:54 crc kubenswrapper[4771]: I0129 09:07:54.993393 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:54Z","lastTransitionTime":"2026-01-29T09:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.095665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.095759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.095775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.095803 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.095820 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.199242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.199289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.199297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.199315 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.199324 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.306842 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.306883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.307183 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.307286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.307298 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.410302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.410351 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.410361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.410377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.410388 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.513993 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.514058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.514150 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.514187 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.514212 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.607294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.607357 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.607369 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.607395 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.607408 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.620172 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.625648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.625767 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.625781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.625802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.625849 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.642849 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.648289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.648341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.648350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.648373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.648388 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.664266 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.669888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.669953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.669964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.669982 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.669997 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.686597 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.692338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.692389 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.692404 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.692421 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.692431 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.705958 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5127b061-1bf1-4563-9f7f-0d3b9538d51f\\\",\\\"systemUUID\\\":\\\"b5e4e256-be21-43c8-be21-43d17dd34516\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:07:55Z is after 2025-08-24T17:21:41Z" Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.706131 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.709039 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.709106 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.709118 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.709140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.709158 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.804725 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:51:11.180732938 +0000 UTC Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.812363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.812645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.812887 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.813015 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.813100 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.838041 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.838101 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.838237 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:07:55 crc kubenswrapper[4771]: E0129 09:07:55.838366 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.916159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.916230 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.916241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.916262 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:55 crc kubenswrapper[4771]: I0129 09:07:55.916274 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:55Z","lastTransitionTime":"2026-01-29T09:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.018889 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.019465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.019553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.019652 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.019777 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.123189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.123233 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.123244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.123261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.123273 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.226289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.226340 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.226361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.226391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.226407 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.329689 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.329771 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.329783 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.329803 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.329818 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.432633 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.432685 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.432725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.432750 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.432763 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.536265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.536323 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.536336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.536358 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.536374 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.639143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.639210 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.639220 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.639239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.639252 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.742611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.742660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.742669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.742684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.742710 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.806124 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:16:57.094162865 +0000 UTC
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.837678 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:07:56 crc kubenswrapper[4771]: E0129 09:07:56.837882 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.837681 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:56 crc kubenswrapper[4771]: E0129 09:07:56.838186 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.845275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.845781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.845915 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.846022 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.846116 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.949373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.949442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.949458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.949482 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:56 crc kubenswrapper[4771]: I0129 09:07:56.949497 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:56Z","lastTransitionTime":"2026-01-29T09:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.052639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.053064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.053151 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.053242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.053344 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.157010 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.157464 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.157575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.157796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.157896 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.261367 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.261411 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.261423 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.261457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.261471 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.364570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.364626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.364639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.364661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.364677 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.467800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.468232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.468301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.468375 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.468437 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.571260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.571723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.571836 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.571939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.572023 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.675758 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.675818 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.675834 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.675857 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.675876 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.779031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.779084 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.779099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.779118 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.779129 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.807227 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:43:30.322112905 +0000 UTC
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.836949 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:07:57 crc kubenswrapper[4771]: E0129 09:07:57.837172 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.837784 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:07:57 crc kubenswrapper[4771]: E0129 09:07:57.837905 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.881649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.881732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.881749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.881772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.881785 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.984631 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.984681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.984726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.984748 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:57 crc kubenswrapper[4771]: I0129 09:07:57.984760 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:57Z","lastTransitionTime":"2026-01-29T09:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.087758 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.087811 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.087825 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.087844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.087857 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.191391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.191454 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.191467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.191488 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.191502 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.295419 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.295495 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.295514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.295537 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.295554 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.398841 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.398892 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.398901 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.398920 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.398931 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.501561 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.501612 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.501625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.501643 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.501656 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.604554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.604616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.604628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.604649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.604660 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.707048 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.707109 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.707133 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.707159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.707175 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.807782 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:03:52.318531171 +0000 UTC
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.810128 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.810202 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.810219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.810237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.810249 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.837210 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.837245 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:07:58 crc kubenswrapper[4771]: E0129 09:07:58.837427 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:07:58 crc kubenswrapper[4771]: E0129 09:07:58.837443 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.912839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.912893 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.912905 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.912923 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:58 crc kubenswrapper[4771]: I0129 09:07:58.912935 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:58Z","lastTransitionTime":"2026-01-29T09:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.015290 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.015352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.015371 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.015394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.015406 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.118266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.118338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.118348 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.118364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.118378 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.220865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.220949 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.220963 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.220983 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.220996 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.324112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.324189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.324212 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.324242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.324266 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.427628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.427677 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.427712 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.427731 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.427743 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.530670 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.530738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.530760 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.530783 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.530799 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.633933 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.633988 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.633999 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.634018 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.634032 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.736847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.736914 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.736931 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.736954 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.736971 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.808076 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:46:39.642612629 +0000 UTC
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.837537 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.837625 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:07:59 crc kubenswrapper[4771]: E0129 09:07:59.837718 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:07:59 crc kubenswrapper[4771]: E0129 09:07:59.837873 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.840655 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.840719 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.840730 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.840751 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.840765 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.943825 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.943924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.943957 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.943979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:07:59 crc kubenswrapper[4771]: I0129 09:07:59.943991 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:07:59Z","lastTransitionTime":"2026-01-29T09:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.047028 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.047074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.047084 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.047102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.047115 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.149744 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.149793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.149802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.149823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.149834 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.252967 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.253462 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.253560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.253653 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.253782 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.357166 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.357234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.357244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.357265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.357280 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.459908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.459976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.459989 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.460009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.460023 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.562927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.562973 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.562982 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.563004 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.563015 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.666225 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.666272 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.666283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.666303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.666315 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.769383 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.769431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.769440 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.769457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.769468 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.808983 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:36:53.188947796 +0000 UTC
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.837545 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.837786 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:00 crc kubenswrapper[4771]: E0129 09:08:00.837920 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:00 crc kubenswrapper[4771]: E0129 09:08:00.838167 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.853950 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237872a95884c0188aeae54dcd32e6e98a9b32eb04de0a6759a7c52da4508a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.871949 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09728498c055cb3e7db5cee1a7defb668e89099788c8ab4718464beedef93b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.874264 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.874306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.874315 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.874330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.874343 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.887957 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cfc8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:39Z\\\",\\\"message\\\":\\\"2026-01-29T09:06:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125\\\\n2026-01-29T09:06:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_95f8adc8-34b3-429b-9ab6-295c3429f125 to /host/opt/cni/bin/\\\\n2026-01-29T09:06:54Z [verbose] multus-daemon started\\\\n2026-01-29T09:06:54Z [verbose] Readiness Indicator file check\\\\n2026-01-29T09:07:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mkfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cfc8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z"
Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.903062 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d61ce40-3dd1-4ed1-8c9d-e251d0af2987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2464ed14fadbbd224da1d17115e491529db20ce0f7cdf36421947bba61e4fd0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48d221265dac95394d94b536c7cf8058b7ced64f9be64faa5bcff5ed26adf79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb5p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vg6n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 29 
09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.917177 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938d1706-ae32-445f-b1b0-6cacad136ef8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts9rc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:07:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lzs9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.932230 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ac54f8-eead-444b-96a6-c0b91db93bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5b22148c3859201a10f19c7cdb699f626976a73607e75b1d3f29915c1158d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e415914021e289a082e3c8605a93117f73e24a031fc8454b364dd079d5f58e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ea42aa5e0cfd94dd126f1ecb9d5e696626e1a1b3e22674498ad7b2466fdff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28606be4c9f7ca1648949edd460b5dcf2750235f43fddbe071f409c8438cf6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.947801 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.964131 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.977627 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.977669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.977683 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.977730 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.977745 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:00Z","lastTransitionTime":"2026-01-29T09:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:00 crc kubenswrapper[4771]: I0129 09:08:00.983346 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608731ad7840fd1ecc62c7a1907cf6da95724d949ee93cffc9b48686c51ed7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f859bde0c9719f257d6842a2c9821f43f2cf2ee9743bebdd0d3d0a2fedbfeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.000896 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ade97e27876c114537b1b7e097ad112eb4509eb18d70e21211d352e9135889b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-79kz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:00Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.019160 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f580e781-76de-491c-a6e6-7295469d366a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6e426fd0e10d6894c7c96a3317876f859ac92b0b02d8814f960b5da5714f5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635d9cc135699908a77ab43c835ee8cd423d078ad943da6598b82a1e6b8a0f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78caf4239e7922c3557467bafb797ab2968560f9c09bea4071eb534a533f7488\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1171c26e1bbe9f935defa36eb302568c5f3b1e7d6cf86b8b83378113452d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d619417e74cd210c9a90e46c49a6d650aa5338b8607200264e71affab6e25e0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f0a1cbec8677241de7e224bd7b3ad692fd70790d4729cc88c52f229086247fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0d528eae9c521840f04e39954f7828588e378ac1e5f60e84c5be3ebe86d61a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmd8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.042141 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9782d151-5b30-48e6-be05-a7457ad47864\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a655b4bf453eab530ce90c06621a47e58d4ac4341937b5a1f2ba71d11a52f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb0359a29500f424bc02d638268e866965e5485bb31dcfab092a56bbc69553f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778f2a9476eae44fb464bebb46352b27566d8a097b5b38328bcb4d10ad1046e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbc824258e9856523873e4a1a57705c57872446
983cec0d4eae584eade23f36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d743f77ce6b879f436f202323388812658a5c7e9e8badce5d907c6b4ea5544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c9bc84d8856bbb2a008f285728c00295f787cd090c09bf1e31d448d48546a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16d5e83265baa5baafc44ac86626464220eff82c837aa5d7d22d0b45c9bb90d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743d6d05b12fac4cea24220d47106ed0bf4dc8c031bc8613cf94cc6625994e91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.063223 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2dade0a-90bd-47af-b039-da60ecbc514a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9967b64c3b673bb21eebeab43ab6cd95
6ae5f39e9a1b76ed994d92347aad8fe2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T09:06:42Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 09:06:36.736602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 09:06:36.739324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1906998176/tls.crt::/tmp/serving-cert-1906998176/tls.key\\\\\\\"\\\\nI0129 09:06:42.704946 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 09:06:42.707801 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 09:06:42.707820 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 09:06:42.707846 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 09:06:42.707853 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 09:06:42.717178 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0129 09:06:42.717194 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 09:06:42.717218 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 09:06:42.717230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 09:06:42.717235 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 09:06:42.717241 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 09:06:42.717245 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 09:06:42.720889 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.079999 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.080059 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.080079 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.080103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.080117 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.084818 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5
477573350c7a60f316c29168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T09:07:50Z\\\",\\\"message\\\":\\\"ervice k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 09:07:50.011791 6765 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 09:07:50.012626 6765 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 09:07:50.011629 6765 services_controller.go:444] Built service openshift-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0129 09:07:50.012643 6765 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T09:07:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rd469\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ntlqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.101279 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aef090-d50a-4a5a-8fb3-1f39ee40a68c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b794e0f5288173225f1df5ac3b1a34a3d93583506502fdf267f6aae383615d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7861703d6fcc2aab11d92848fdb677b37a882c8ba043c4c4c37cc0c660aed3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef551f4768c82dd659d83cab40cd6ecd63760bffe7fa415677376e44173e2393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.117835 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.130257 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ksdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43b02b53-246f-4869-8463-729e36aff07e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf66a480e9c428bbe957edd0c7b8b2d45b4c28be766dee860c6471659e5f4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-85w6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ksdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.143942 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"537b091a-4684-476a-9aee-507b9982b04d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c7d6778783509b02c5dae93ea64fa81343f859fc3328f719d2677a7e6cc347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38e11d9bb782c04efe1c1360248b2f247e486b3a991c16fd56d924e94d948a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.156795 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gzd9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2239f9c-5e91-409a-a0bc-680754704c77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T09:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769c2a8d392c0385601ad91944d0e5e68fd2694de55b9ca566c5d7b3d33374b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T09:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbmgv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T09:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gzd9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T09:08:01Z is after 2025-08-24T17:21:41Z" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.183148 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.183186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.183194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.183211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.183225 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.286084 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.286146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.286159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.286177 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.286187 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.389057 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.389114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.389124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.389143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.389155 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.491984 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.492056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.492075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.492101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.492120 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.595726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.595786 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.595797 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.595818 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.595868 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.698622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.698673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.698684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.698715 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.698729 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.802205 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.802276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.802291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.802315 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.802330 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.809610 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:57:49.832990188 +0000 UTC Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.837666 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.837850 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:01 crc kubenswrapper[4771]: E0129 09:08:01.837955 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:01 crc kubenswrapper[4771]: E0129 09:08:01.838485 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.839114 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:08:01 crc kubenswrapper[4771]: E0129 09:08:01.839390 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.905817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.905861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.905871 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.905890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:01 crc kubenswrapper[4771]: I0129 09:08:01.905901 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:01Z","lastTransitionTime":"2026-01-29T09:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.009582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.010408 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.010491 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.010584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.010679 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.113892 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.113936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.113946 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.113962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.113974 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.217382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.217438 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.217449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.217473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.217493 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.320911 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.320968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.320980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.321001 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.321016 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.424967 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.425027 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.425037 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.425060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.425072 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.530619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.530723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.530741 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.530763 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.530785 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.633764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.633834 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.633847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.633866 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.633879 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.737655 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.737721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.737734 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.737753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.737764 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.810615 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:41:07.088962289 +0000 UTC Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.837069 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.837426 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:02 crc kubenswrapper[4771]: E0129 09:08:02.837553 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:02 crc kubenswrapper[4771]: E0129 09:08:02.837826 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.839613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.839644 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.839654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.839670 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.839681 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.942144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.942198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.942208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.942225 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:02 crc kubenswrapper[4771]: I0129 09:08:02.942240 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:02Z","lastTransitionTime":"2026-01-29T09:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.045812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.045885 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.045899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.045918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.045930 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.148964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.149014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.149029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.149055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.149071 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.252070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.252793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.252817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.252847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.252863 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.356486 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.356536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.356545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.356563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.356574 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.459796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.459839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.459850 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.459867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.459879 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.563843 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.563912 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.563927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.563950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.563964 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.667835 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.667893 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.667906 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.667928 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.667944 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.770292 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.770349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.770358 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.770383 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.770403 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:03Z","lastTransitionTime":"2026-01-29T09:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.811535 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:32:28.059162753 +0000 UTC Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.837091 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:03 crc kubenswrapper[4771]: E0129 09:08:03.837257 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:03 crc kubenswrapper[4771]: I0129 09:08:03.837319 4771 util.go:30] "No sandbox for pod can be found. 
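Every one of the "Error syncing pod" entries above has the same root cause: /etc/kubernetes/cni/net.d/ stays empty until the cluster's network provider (OVN-Kubernetes on this cluster) starts and writes its own config. Purely for illustration — on a real OpenShift node you would wait for the operator rather than hand-write a config — a Go sketch that drops a minimal bridge conflist into that directory; the file name, subnet, and plugin choice are all assumptions:

package main

import (
	"log"
	"os"
)

// A minimal CNI .conflist; on this cluster the real file is written by
// OVN-Kubernetes once it starts, so this content is purely illustrative.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    }
  ]
}`

func main() {
	// Requires root; this is the directory named in the NetworkReady errors.
	if err := os.MkdirAll("/etc/kubernetes/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	// The runtime reloads the directory, and the kubelet's NetworkReady
	// check flips to true once a valid config appears.
	if err := os.WriteFile("/etc/kubernetes/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}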
Jan 29 09:08:03 crc kubenswrapper[4771]: E0129 09:08:03.837507 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[Node status cycle repeats, timestamps only changing, from 09:08:03.873524 through 09:08:04.803618.]
Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.812217 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:56:13.942108697 +0000 UTC
Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.837895 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.837896 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:04 crc kubenswrapper[4771]: E0129 09:08:04.838084 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
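The certificate_manager.go entries print a different rotation deadline on every pass (2026-01-16 above, then 2025-11-15, now 2025-12-30): client-go re-samples the deadline as a jittered fraction of the certificate's validity window, and every sampled deadline here is already in the past relative to the Jan 29 log time, so the manager will rotate as soon as the API server is reachable (it does, at 09:08:06 below). A rough Go sketch of that sampling; the 70-90% jitter range and the one-year lifetime are assumptions, not client-go's exact constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point roughly 70-90% of the way through the
// certificate's validity window (assumed jitter range).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.70 + 0.20*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                               // assumed one-year lifetime
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}

Under the assumed one-year window the sampled deadlines span early November 2025 through mid-January 2026, which brackets every deadline printed in this capture.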
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:04 crc kubenswrapper[4771]: E0129 09:08:04.838154 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.907173 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.907237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.907249 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.907270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:04 crc kubenswrapper[4771]: I0129 09:08:04.907282 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:04Z","lastTransitionTime":"2026-01-29T09:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.011452 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.011496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.011505 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.011522 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.011533 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.115271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.115511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.115526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.115549 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.115567 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.218512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.218594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.218604 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.218625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.218648 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.325291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.325550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.325566 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.325587 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.325599 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.427833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.427872 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.427884 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.427901 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.427913 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.531248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.531499 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.531630 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.531733 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.531819 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.634350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.634413 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.634427 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.634450 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.634465 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.736953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.737009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.737021 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.737040 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.737056 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.812867 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:45:48.915718732 +0000 UTC Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.837654 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.837931 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:05 crc kubenswrapper[4771]: E0129 09:08:05.838074 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:05 crc kubenswrapper[4771]: E0129 09:08:05.838752 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.840508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.840540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.840549 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.840563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.840573 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.943443 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.943497 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.943513 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.943536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 09:08:05 crc kubenswrapper[4771]: I0129 09:08:05.943552 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T09:08:05Z","lastTransitionTime":"2026-01-29T09:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.096162 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"]
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.097084 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.100508 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.100810 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.101072 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.101108 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.139688 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.139664258 podStartE2EDuration="1m21.139664258s" podCreationTimestamp="2026-01-29 09:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.123053951 +0000 UTC m=+106.245894178" watchObservedRunningTime="2026-01-29 09:08:06.139664258 +0000 UTC m=+106.262504485"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.148791 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.148858 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.148943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.149002 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.149023 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.204480 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.20445054 podStartE2EDuration="39.20445054s" podCreationTimestamp="2026-01-29 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.204229213 +0000 UTC m=+106.327069450" watchObservedRunningTime="2026-01-29 09:08:06.20445054 +0000 UTC m=+106.327290777"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.233412 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gzd9l" podStartSLOduration=78.233382713 podStartE2EDuration="1m18.233382713s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.219959006 +0000 UTC m=+106.342799243" watchObservedRunningTime="2026-01-29 09:08:06.233382713 +0000 UTC m=+106.356222940"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.247592 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ksdpd" podStartSLOduration=78.247560582 podStartE2EDuration="1m18.247560582s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.23362079 +0000 UTC m=+106.356461017" watchObservedRunningTime="2026-01-29 09:08:06.247560582 +0000 UTC m=+106.370400809"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.248050 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=53.248041505 podStartE2EDuration="53.248041505s" podCreationTimestamp="2026-01-29 09:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.247791448 +0000 UTC m=+106.370631665" watchObservedRunningTime="2026-01-29 09:08:06.248041505 +0000 UTC m=+106.370881732"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.249873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.249952 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.250016 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.250047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.250102 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.250202 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.250575 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.251096 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.259226 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.276196 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2b147e0-e428-4ab3-8e7c-a9f5e4a63397-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hpsd7\" (UID: \"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.356168 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cfc8z" podStartSLOduration=78.356137255 podStartE2EDuration="1m18.356137255s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.332712076 +0000 UTC m=+106.455552323" watchObservedRunningTime="2026-01-29 09:08:06.356137255 +0000 UTC m=+106.478977482"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.356930 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vg6n9" podStartSLOduration=77.356924217 podStartE2EDuration="1m17.356924217s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.356749932 +0000 UTC m=+106.479590159" watchObservedRunningTime="2026-01-29 09:08:06.356924217 +0000 UTC m=+106.479764444"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.409566 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=80.409535957 podStartE2EDuration="1m20.409535957s" podCreationTimestamp="2026-01-29 09:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.409527026 +0000 UTC m=+106.532367273" watchObservedRunningTime="2026-01-29 09:08:06.409535957 +0000 UTC m=+106.532376184"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.415050 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7"
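The reconciler entries above walk the CVO pod's five volumes from VerifyControllerAttachedVolume through MountVolume.SetUp succeeded. Reconstructed as a Go fragment using the upstream API types, with the volume names and source kinds read straight off the UniqueName fields; the host paths and anything else not in the log are assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Volume set of cluster-version-operator-5c965bbfc6-hpsd7, per the
	// MountVolume.SetUp entries; the two host paths are assumed, and the
	// secret/configmap names come from the reflector cache lines above.
	vols := []v1.Volume{
		{Name: "etc-ssl-certs", VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: "/etc/ssl/certs"}}},
		{Name: "etc-cvo-updatepayloads", VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: "/etc/cvo/updatepayloads"}}},
		{Name: "serving-cert", VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{SecretName: "cluster-version-operator-serving-cert"}}},
		{Name: "service-ca", VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{LocalObjectReference: v1.LocalObjectReference{Name: "openshift-service-ca.crt"}}}},
		{Name: "kube-api-access", VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{}}}, // service-account token projection, sources omitted
	}
	for _, v := range vols {
		fmt.Println(v.Name)
	}
}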
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.470932 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7" event={"ID":"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397","Type":"ContainerStarted","Data":"281cd24e040eea2949a04adb053b4bc8b466253fea8d26f289b39007afc124d4"}
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.495420 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=84.495388221 podStartE2EDuration="1m24.495388221s" podCreationTimestamp="2026-01-29 09:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.463461303 +0000 UTC m=+106.586301540" watchObservedRunningTime="2026-01-29 09:08:06.495388221 +0000 UTC m=+106.618228448"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.535360 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podStartSLOduration=78.53533255400001 podStartE2EDuration="1m18.535332554s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.534060708 +0000 UTC m=+106.656900935" watchObservedRunningTime="2026-01-29 09:08:06.535332554 +0000 UTC m=+106.658172781"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.813892 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:38:28.744760608 +0000 UTC
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.814511 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.827374 4771 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.837116 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:06 crc kubenswrapper[4771]: I0129 09:08:06.837201 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:06 crc kubenswrapper[4771]: E0129 09:08:06.837289 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:06 crc kubenswrapper[4771]: E0129 09:08:06.837389 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:07 crc kubenswrapper[4771]: I0129 09:08:07.367919 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:07 crc kubenswrapper[4771]: E0129 09:08:07.368137 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:08:07 crc kubenswrapper[4771]: E0129 09:08:07.368232 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs podName:938d1706-ae32-445f-b1b0-6cacad136ef8 nodeName:}" failed. No retries permitted until 2026-01-29 09:09:11.368211123 +0000 UTC m=+171.491051350 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs") pod "network-metrics-daemon-lzs9r" (UID: "938d1706-ae32-445f-b1b0-6cacad136ef8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 09:08:07 crc kubenswrapper[4771]: I0129 09:08:07.477363 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7" event={"ID":"a2b147e0-e428-4ab3-8e7c-a9f5e4a63397","Type":"ContainerStarted","Data":"5beb15b5de89b21bd69df657daa169e3181655fc07bb9e3dc2d571825b9120ec"} Jan 29 09:08:07 crc kubenswrapper[4771]: I0129 09:08:07.496052 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kx4bn" podStartSLOduration=79.496031517 podStartE2EDuration="1m19.496031517s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:06.558507966 +0000 UTC m=+106.681348193" watchObservedRunningTime="2026-01-29 09:08:07.496031517 +0000 UTC m=+107.618871744" Jan 29 09:08:07 crc kubenswrapper[4771]: I0129 09:08:07.837408 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:07 crc kubenswrapper[4771]: E0129 09:08:07.837614 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:07 crc kubenswrapper[4771]: I0129 09:08:07.837651 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:07 crc kubenswrapper[4771]: E0129 09:08:07.838024 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:08 crc kubenswrapper[4771]: I0129 09:08:08.838175 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:08 crc kubenswrapper[4771]: I0129 09:08:08.838217 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:08 crc kubenswrapper[4771]: E0129 09:08:08.838407 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:08 crc kubenswrapper[4771]: E0129 09:08:08.838547 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:09 crc kubenswrapper[4771]: I0129 09:08:09.837888 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:09 crc kubenswrapper[4771]: E0129 09:08:09.838528 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:09 crc kubenswrapper[4771]: I0129 09:08:09.837888 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:09 crc kubenswrapper[4771]: E0129 09:08:09.839235 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:10 crc kubenswrapper[4771]: I0129 09:08:10.837147 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:10 crc kubenswrapper[4771]: I0129 09:08:10.837324 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:10 crc kubenswrapper[4771]: E0129 09:08:10.838758 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:10 crc kubenswrapper[4771]: E0129 09:08:10.839026 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:11 crc kubenswrapper[4771]: I0129 09:08:11.837585 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:11 crc kubenswrapper[4771]: E0129 09:08:11.837974 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:11 crc kubenswrapper[4771]: I0129 09:08:11.838229 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:11 crc kubenswrapper[4771]: E0129 09:08:11.838315 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:12 crc kubenswrapper[4771]: I0129 09:08:12.838348 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:12 crc kubenswrapper[4771]: E0129 09:08:12.838572 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:12 crc kubenswrapper[4771]: I0129 09:08:12.839007 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:12 crc kubenswrapper[4771]: E0129 09:08:12.839251 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:13 crc kubenswrapper[4771]: I0129 09:08:13.836936 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:13 crc kubenswrapper[4771]: I0129 09:08:13.836995 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:13 crc kubenswrapper[4771]: E0129 09:08:13.837075 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:13 crc kubenswrapper[4771]: E0129 09:08:13.837197 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:14 crc kubenswrapper[4771]: I0129 09:08:14.838191 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:14 crc kubenswrapper[4771]: I0129 09:08:14.838230 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:14 crc kubenswrapper[4771]: E0129 09:08:14.838667 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:14 crc kubenswrapper[4771]: E0129 09:08:14.838751 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:15 crc kubenswrapper[4771]: I0129 09:08:15.837315 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:15 crc kubenswrapper[4771]: E0129 09:08:15.837548 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:15 crc kubenswrapper[4771]: I0129 09:08:15.838356 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:15 crc kubenswrapper[4771]: E0129 09:08:15.838537 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:15 crc kubenswrapper[4771]: I0129 09:08:15.838609 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:08:15 crc kubenswrapper[4771]: E0129 09:08:15.838990 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" Jan 29 09:08:16 crc kubenswrapper[4771]: I0129 09:08:16.837595 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:16 crc kubenswrapper[4771]: I0129 09:08:16.837686 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:16 crc kubenswrapper[4771]: E0129 09:08:16.837813 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:16 crc kubenswrapper[4771]: E0129 09:08:16.837950 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:17 crc kubenswrapper[4771]: I0129 09:08:17.838033 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:17 crc kubenswrapper[4771]: I0129 09:08:17.838075 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:17 crc kubenswrapper[4771]: E0129 09:08:17.838249 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:17 crc kubenswrapper[4771]: E0129 09:08:17.838505 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:18 crc kubenswrapper[4771]: I0129 09:08:18.837421 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:18 crc kubenswrapper[4771]: I0129 09:08:18.837494 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:18 crc kubenswrapper[4771]: E0129 09:08:18.837619 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:18 crc kubenswrapper[4771]: E0129 09:08:18.837773 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:19 crc kubenswrapper[4771]: I0129 09:08:19.837414 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:19 crc kubenswrapper[4771]: I0129 09:08:19.837490 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:19 crc kubenswrapper[4771]: E0129 09:08:19.837635 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:19 crc kubenswrapper[4771]: E0129 09:08:19.837977 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:20 crc kubenswrapper[4771]: E0129 09:08:20.766094 4771 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 09:08:20 crc kubenswrapper[4771]: I0129 09:08:20.838047 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:20 crc kubenswrapper[4771]: I0129 09:08:20.838122 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:20 crc kubenswrapper[4771]: E0129 09:08:20.839337 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:20 crc kubenswrapper[4771]: E0129 09:08:20.839500 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:21 crc kubenswrapper[4771]: E0129 09:08:21.066324 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 09:08:21 crc kubenswrapper[4771]: I0129 09:08:21.837251 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:21 crc kubenswrapper[4771]: I0129 09:08:21.837274 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:21 crc kubenswrapper[4771]: E0129 09:08:21.837418 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:21 crc kubenswrapper[4771]: E0129 09:08:21.837548 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:22 crc kubenswrapper[4771]: I0129 09:08:22.837063 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:22 crc kubenswrapper[4771]: I0129 09:08:22.837248 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:22 crc kubenswrapper[4771]: E0129 09:08:22.837396 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:22 crc kubenswrapper[4771]: E0129 09:08:22.837525 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:23 crc kubenswrapper[4771]: I0129 09:08:23.837108 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:23 crc kubenswrapper[4771]: I0129 09:08:23.837181 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:23 crc kubenswrapper[4771]: E0129 09:08:23.838105 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:23 crc kubenswrapper[4771]: E0129 09:08:23.838148 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:24 crc kubenswrapper[4771]: I0129 09:08:24.837825 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:24 crc kubenswrapper[4771]: I0129 09:08:24.837889 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:24 crc kubenswrapper[4771]: E0129 09:08:24.838042 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:24 crc kubenswrapper[4771]: E0129 09:08:24.838190 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:25 crc kubenswrapper[4771]: I0129 09:08:25.837097 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:25 crc kubenswrapper[4771]: I0129 09:08:25.837194 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:25 crc kubenswrapper[4771]: E0129 09:08:25.837359 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:25 crc kubenswrapper[4771]: E0129 09:08:25.837554 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:26 crc kubenswrapper[4771]: E0129 09:08:26.068318 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.547149 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/1.log"
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.547620 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/0.log"
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.547658 4771 generic.go:334] "Generic (PLEG): container finished" podID="a46c7969-6ce3-4ba5-a1ab-73bbf487ae73" containerID="2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3" exitCode=1
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.547710 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerDied","Data":"2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3"}
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.547755 4771 scope.go:117] "RemoveContainer" containerID="1e440af9d3bef363b93b4f7dadfd8b2f2d8d68bef32cbf5b7482247dc83abc4c"
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.548212 4771 scope.go:117] "RemoveContainer" containerID="2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3"
Jan 29 09:08:26 crc kubenswrapper[4771]: E0129 09:08:26.548382 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cfc8z_openshift-multus(a46c7969-6ce3-4ba5-a1ab-73bbf487ae73)\"" pod="openshift-multus/multus-cfc8z" podUID="a46c7969-6ce3-4ba5-a1ab-73bbf487ae73"
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.575132 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hpsd7" podStartSLOduration=98.575114819 podStartE2EDuration="1m38.575114819s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:07.496411038 +0000 UTC m=+107.619251295" watchObservedRunningTime="2026-01-29 09:08:26.575114819 +0000 UTC m=+126.697955046"
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.837016 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:26 crc kubenswrapper[4771]: E0129 09:08:26.837199 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:26 crc kubenswrapper[4771]: I0129 09:08:26.837389 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:26 crc kubenswrapper[4771]: E0129 09:08:26.837646 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:27 crc kubenswrapper[4771]: I0129 09:08:27.552973 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/1.log"
Jan 29 09:08:27 crc kubenswrapper[4771]: I0129 09:08:27.837582 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:27 crc kubenswrapper[4771]: I0129 09:08:27.837722 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:27 crc kubenswrapper[4771]: E0129 09:08:27.837830 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:27 crc kubenswrapper[4771]: E0129 09:08:27.837905 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:28 crc kubenswrapper[4771]: I0129 09:08:28.837170 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:28 crc kubenswrapper[4771]: I0129 09:08:28.837565 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:28 crc kubenswrapper[4771]: E0129 09:08:28.837657 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:28 crc kubenswrapper[4771]: E0129 09:08:28.837820 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:28 crc kubenswrapper[4771]: I0129 09:08:28.837987 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"
Jan 29 09:08:28 crc kubenswrapper[4771]: E0129 09:08:28.838178 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ntlqb_openshift-ovn-kubernetes(ff7f16f4-439f-4743-b5f2-b9c6f6c346f5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"
Jan 29 09:08:29 crc kubenswrapper[4771]: I0129 09:08:29.838019 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:29 crc kubenswrapper[4771]: I0129 09:08:29.838022 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:29 crc kubenswrapper[4771]: E0129 09:08:29.838263 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:29 crc kubenswrapper[4771]: E0129 09:08:29.838330 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:30 crc kubenswrapper[4771]: I0129 09:08:30.837435 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:30 crc kubenswrapper[4771]: I0129 09:08:30.837529 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:30 crc kubenswrapper[4771]: E0129 09:08:30.838846 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:30 crc kubenswrapper[4771]: E0129 09:08:30.839008 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:31 crc kubenswrapper[4771]: E0129 09:08:31.069065 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 09:08:31 crc kubenswrapper[4771]: I0129 09:08:31.837868 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:31 crc kubenswrapper[4771]: I0129 09:08:31.837863 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:31 crc kubenswrapper[4771]: E0129 09:08:31.838016 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:31 crc kubenswrapper[4771]: E0129 09:08:31.838163 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:32 crc kubenswrapper[4771]: I0129 09:08:32.837359 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:32 crc kubenswrapper[4771]: I0129 09:08:32.837365 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:32 crc kubenswrapper[4771]: E0129 09:08:32.837576 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:32 crc kubenswrapper[4771]: E0129 09:08:32.837747 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:33 crc kubenswrapper[4771]: I0129 09:08:33.837326 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:33 crc kubenswrapper[4771]: E0129 09:08:33.837532 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:33 crc kubenswrapper[4771]: I0129 09:08:33.837326 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:33 crc kubenswrapper[4771]: E0129 09:08:33.837803 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:34 crc kubenswrapper[4771]: I0129 09:08:34.837925 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:34 crc kubenswrapper[4771]: I0129 09:08:34.837925 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:34 crc kubenswrapper[4771]: E0129 09:08:34.838096 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:34 crc kubenswrapper[4771]: E0129 09:08:34.838159 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:35 crc kubenswrapper[4771]: I0129 09:08:35.837634 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:35 crc kubenswrapper[4771]: E0129 09:08:35.837809 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:35 crc kubenswrapper[4771]: I0129 09:08:35.837838 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:35 crc kubenswrapper[4771]: E0129 09:08:35.838121 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:36 crc kubenswrapper[4771]: E0129 09:08:36.071124 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 09:08:36 crc kubenswrapper[4771]: I0129 09:08:36.838851 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:36 crc kubenswrapper[4771]: I0129 09:08:36.838933 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:36 crc kubenswrapper[4771]: E0129 09:08:36.839010 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:36 crc kubenswrapper[4771]: E0129 09:08:36.839119 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:37 crc kubenswrapper[4771]: I0129 09:08:37.837863 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:37 crc kubenswrapper[4771]: E0129 09:08:37.838069 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:37 crc kubenswrapper[4771]: I0129 09:08:37.838350 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:37 crc kubenswrapper[4771]: E0129 09:08:37.838563 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:38 crc kubenswrapper[4771]: I0129 09:08:38.837884 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:38 crc kubenswrapper[4771]: E0129 09:08:38.838061 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:38 crc kubenswrapper[4771]: I0129 09:08:38.838339 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:38 crc kubenswrapper[4771]: E0129 09:08:38.838555 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:39 crc kubenswrapper[4771]: I0129 09:08:39.837404 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:39 crc kubenswrapper[4771]: E0129 09:08:39.837666 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:39 crc kubenswrapper[4771]: I0129 09:08:39.838482 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:39 crc kubenswrapper[4771]: E0129 09:08:39.838788 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:40 crc kubenswrapper[4771]: I0129 09:08:40.837789 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:40 crc kubenswrapper[4771]: I0129 09:08:40.838037 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:40 crc kubenswrapper[4771]: I0129 09:08:40.838402 4771 scope.go:117] "RemoveContainer" containerID="2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3"
Jan 29 09:08:40 crc kubenswrapper[4771]: E0129 09:08:40.841314 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:40 crc kubenswrapper[4771]: E0129 09:08:40.842844 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:41 crc kubenswrapper[4771]: E0129 09:08:41.071582 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 09:08:41 crc kubenswrapper[4771]: I0129 09:08:41.601581 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/1.log"
Jan 29 09:08:41 crc kubenswrapper[4771]: I0129 09:08:41.601663 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerStarted","Data":"6993705b1fbc7657dfd5e05501f942071bb1210c0422fce04cd3c57968286297"}
Jan 29 09:08:41 crc kubenswrapper[4771]: I0129 09:08:41.837984 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:41 crc kubenswrapper[4771]: E0129 09:08:41.838164 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:41 crc kubenswrapper[4771]: I0129 09:08:41.838014 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:41 crc kubenswrapper[4771]: E0129 09:08:41.838959 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:42 crc kubenswrapper[4771]: I0129 09:08:42.837944 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:42 crc kubenswrapper[4771]: I0129 09:08:42.838077 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:42 crc kubenswrapper[4771]: E0129 09:08:42.838105 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:42 crc kubenswrapper[4771]: E0129 09:08:42.838259 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:43 crc kubenswrapper[4771]: I0129 09:08:43.837740 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:43 crc kubenswrapper[4771]: I0129 09:08:43.837835 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:43 crc kubenswrapper[4771]: E0129 09:08:43.837943 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:43 crc kubenswrapper[4771]: I0129 09:08:43.837990 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"
Jan 29 09:08:43 crc kubenswrapper[4771]: E0129 09:08:43.838091 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:44 crc kubenswrapper[4771]: I0129 09:08:44.614082 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/3.log"
Jan 29 09:08:44 crc kubenswrapper[4771]: I0129 09:08:44.617102 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerStarted","Data":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"}
Jan 29 09:08:44 crc kubenswrapper[4771]: I0129 09:08:44.617936 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb"
Jan 29 09:08:44 crc kubenswrapper[4771]: I0129 09:08:44.837010 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:44 crc kubenswrapper[4771]: I0129 09:08:44.837063 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:44 crc kubenswrapper[4771]: E0129 09:08:44.837214 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:44 crc kubenswrapper[4771]: E0129 09:08:44.837583 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:45 crc kubenswrapper[4771]: I0129 09:08:45.024055 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podStartSLOduration=117.024025968 podStartE2EDuration="1m57.024025968s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:08:44.65671315 +0000 UTC m=+144.779553377" watchObservedRunningTime="2026-01-29 09:08:45.024025968 +0000 UTC m=+145.146866195"
Jan 29 09:08:45 crc kubenswrapper[4771]: I0129 09:08:45.024611 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lzs9r"]
Jan 29 09:08:45 crc kubenswrapper[4771]: I0129 09:08:45.620992 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:45 crc kubenswrapper[4771]: E0129 09:08:45.622080 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:45 crc kubenswrapper[4771]: I0129 09:08:45.837135 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:45 crc kubenswrapper[4771]: I0129 09:08:45.837205 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:45 crc kubenswrapper[4771]: E0129 09:08:45.837383 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:45 crc kubenswrapper[4771]: E0129 09:08:45.838167 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:46 crc kubenswrapper[4771]: E0129 09:08:46.073659 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 09:08:46 crc kubenswrapper[4771]: I0129 09:08:46.837341 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:46 crc kubenswrapper[4771]: E0129 09:08:46.837545 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:46 crc kubenswrapper[4771]: I0129 09:08:46.837756 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:46 crc kubenswrapper[4771]: E0129 09:08:46.837998 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:47 crc kubenswrapper[4771]: I0129 09:08:47.837125 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:47 crc kubenswrapper[4771]: I0129 09:08:47.837255 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:47 crc kubenswrapper[4771]: E0129 09:08:47.837374 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 09:08:47 crc kubenswrapper[4771]: E0129 09:08:47.837475 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 09:08:48 crc kubenswrapper[4771]: I0129 09:08:48.837492 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r"
Jan 29 09:08:48 crc kubenswrapper[4771]: E0129 09:08:48.837751 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8"
Jan 29 09:08:48 crc kubenswrapper[4771]: I0129 09:08:48.837983 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:08:48 crc kubenswrapper[4771]: E0129 09:08:48.838205 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 09:08:49 crc kubenswrapper[4771]: I0129 09:08:49.837993 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:08:49 crc kubenswrapper[4771]: I0129 09:08:49.837999 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:08:49 crc kubenswrapper[4771]: E0129 09:08:49.838256 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:08:49 crc kubenswrapper[4771]: E0129 09:08:49.838322 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:08:50 crc kubenswrapper[4771]: I0129 09:08:50.837355 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:50 crc kubenswrapper[4771]: I0129 09:08:50.837391 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:50 crc kubenswrapper[4771]: E0129 09:08:50.839459 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:08:50 crc kubenswrapper[4771]: E0129 09:08:50.839766 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lzs9r" podUID="938d1706-ae32-445f-b1b0-6cacad136ef8" Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.592912 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.593115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593224 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:10:53.59318037 +0000 UTC m=+273.716020597 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593278 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593299 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593312 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.593393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593402 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:10:53.593382906 +0000 UTC m=+273.716223133 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.593443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.593485 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593625 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593658 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593711 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:10:53.593691874 +0000 UTC m=+273.716532101 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593834 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:10:53.593805498 +0000 UTC m=+273.716645905 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593875 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593958 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.593983 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:08:51 crc kubenswrapper[4771]: E0129 09:08:51.594092 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:10:53.594056255 +0000 UTC m=+273.716896682 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.837910 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.838182 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.840069 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.840249 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.842610 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 09:08:51 crc kubenswrapper[4771]: I0129 09:08:51.842658 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 09:08:52 crc kubenswrapper[4771]: I0129 09:08:52.837872 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:08:52 crc kubenswrapper[4771]: I0129 09:08:52.838177 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:08:52 crc kubenswrapper[4771]: I0129 09:08:52.841005 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 09:08:52 crc kubenswrapper[4771]: I0129 09:08:52.841159 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.176244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.213993 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.214361 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.217363 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.217422 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.218043 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.218189 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.221121 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.221680 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.222017 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rsp6c"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.222119 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.222153 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.222736 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.225296 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9qbzj"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.226159 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ztsgd"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.226801 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.227353 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.227517 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.227551 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.227733 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.229418 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.229633 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.229799 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.230049 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.230179 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.230291 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.230405 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.230863 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.232177 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.232880 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.233109 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.233230 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.234243 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.234433 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.234576 4771 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.234711 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.234903 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.235010 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.235115 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.235236 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.235366 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.235555 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.235730 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.236177 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.236286 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.236392 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.236513 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.237336 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.237530 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.238417 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.238800 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.239290 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.239585 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.239725 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc 
kubenswrapper[4771]: I0129 09:08:57.239872 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.241531 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.247759 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.249021 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.260581 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ld9n"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.272968 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.273841 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.274800 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.274926 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.275666 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nn2z2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.275742 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.276289 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.278980 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.279661 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.286933 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.287138 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.287303 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.287330 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.287471 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.287707 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.287922 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.288109 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.288181 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.288267 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.288335 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.290192 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.290416 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.290565 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.290715 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.292988 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.293162 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.293165 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 09:08:57 crc 
kubenswrapper[4771]: I0129 09:08:57.296718 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.297874 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.298030 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.298159 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.298344 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.298596 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.298717 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.298840 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.298921 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.299008 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.299105 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.299198 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.303101 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.306238 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.307647 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.307996 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.311674 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jzc5h"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.312256 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.312495 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.316056 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ztsgd"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.316152 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.316610 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.320066 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9qbzj"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.334422 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jcdkc"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.335765 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.341016 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9vbh"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.341854 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.349649 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.349816 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.349898 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.350035 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.350117 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.349678 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.350459 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.350541 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.350334 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.350926 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.351824 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.352049 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5xjmf"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.359663 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.359915 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.361412 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.362385 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.365968 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.366449 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.366755 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.384622 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.384736 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385115 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb98z\" (UniqueName: \"kubernetes.io/projected/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-kube-api-access-sb98z\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385149 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385166 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-encryption-config\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385185 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6sc\" (UniqueName: \"kubernetes.io/projected/87128957-2efe-44be-bfa2-a5dc8a251453-kube-api-access-6x6sc\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385214 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f6553555-ed45-445a-a12a-c332f7d8ac0e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385234 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/acd89578-60c3-4368-9b2c-59dc899d1a08-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385252 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-config\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385274 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385291 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-service-ca-bundle\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385308 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc 
kubenswrapper[4771]: I0129 09:08:57.385324 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-config\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-config\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385365 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385386 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wt2x\" (UniqueName: \"kubernetes.io/projected/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-kube-api-access-2wt2x\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385403 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385424 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385448 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxr6v\" (UniqueName: \"kubernetes.io/projected/47cda8f8-ad10-4898-a066-1c388df82ab4-kube-api-access-zxr6v\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385470 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" 
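
The reconciler_common.go entries above and below record the kubelet volume manager's reconcile pass: every volume a pod needs is first verified as attached (VerifyControllerAttachedVolume), then mounted (MountVolume), and a failed operation is parked under exponential backoff, which is why the 09:08:51 failures all report "No retries permitted until ... (durationBeforeRetry 2m2s)" and only clear after the "Caches populated" lines land. The Go sketch below is a minimal toy model of that loop, not kubelet source: the volume names and the 2m2s cap are taken from the log, while the 500 ms initial backoff, the tick cadence, and all identifiers are illustrative assumptions.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Cap on volume-operation backoff, matching the "durationBeforeRetry 2m2s"
    // entries logged at 09:08:51.
    const maxDurationBeforeRetry = 2*time.Minute + 2*time.Second

    type pendingOp struct {
        delay   time.Duration // current backoff interval
        retryAt time.Time     // "No retries permitted until ..."
    }

    func main() {
        // Volume names taken from the log; the rest of the setup is illustrative.
        desired := []string{"kube-api-access-cqllr", "nginx-conf", "networking-console-plugin-cert"}
        mounted := map[string]bool{}
        pending := map[string]*pendingOp{}

        // Until the informer caches are populated, every mount fails the way
        // the 09:08:51 entries do ("object ... not registered").
        cachesPopulated := false
        mount := func(vol string) error {
            if !cachesPopulated {
                return errors.New("object not registered")
            }
            mounted[vol] = true
            return nil
        }

        for tick := 0; tick < 4; tick++ {
            now := time.Now()
            for _, vol := range desired {
                if mounted[vol] {
                    continue // actual state already matches desired state
                }
                if op, ok := pending[vol]; ok && now.Before(op.retryAt) {
                    continue // still inside the backoff window
                }
                err := mount(vol)
                if err == nil {
                    fmt.Printf("MountVolume succeeded for %q\n", vol)
                    continue
                }
                op := pending[vol]
                if op == nil {
                    // 500 ms initial backoff is an assumption for illustration.
                    op = &pendingOp{delay: 500 * time.Millisecond}
                    pending[vol] = op
                } else {
                    op.delay *= 2
                    if op.delay > maxDurationBeforeRetry {
                        op.delay = maxDurationBeforeRetry
                    }
                }
                op.retryAt = now.Add(op.delay)
                fmt.Printf("MountVolume.SetUp failed for %q: %v; no retries until %s (durationBeforeRetry %s)\n",
                    vol, err, op.retryAt.Format(time.RFC3339Nano), op.delay)
            }
            if tick == 1 {
                cachesPopulated = true // the "Caches populated for *v1.ConfigMap/Secret" lines arrive
            }
            time.Sleep(600 * time.Millisecond)
        }
    }

Run as-is, it prints two failing rounds with the backoff doubling from 500ms to 1s, then three successful mounts once the simulated caches populate, mirroring the 09:08:51 failures giving way to the 09:08:51-09:08:52 "Caches populated" sequence in the log.
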
Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385491 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb03e2-b241-4df4-8295-33b5ad3eae58-audit-dir\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385509 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvk2\" (UniqueName: \"kubernetes.io/projected/acd89578-60c3-4368-9b2c-59dc899d1a08-kube-api-access-ckvk2\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385531 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87128957-2efe-44be-bfa2-a5dc8a251453-config\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385559 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47cda8f8-ad10-4898-a066-1c388df82ab4-node-pullsecrets\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/acd89578-60c3-4368-9b2c-59dc899d1a08-images\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385607 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-config\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385629 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385651 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvg8r\" (UniqueName: \"kubernetes.io/projected/bfcb03e2-b241-4df4-8295-33b5ad3eae58-kube-api-access-xvg8r\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385669 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6553555-ed45-445a-a12a-c332f7d8ac0e-serving-cert\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385712 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385734 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87128957-2efe-44be-bfa2-a5dc8a251453-auth-proxy-config\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385753 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-etcd-serving-ca\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvr4x\" (UniqueName: \"kubernetes.io/projected/13594924-bb90-4488-84c7-2046b323e219-kube-api-access-hvr4x\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385791 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f07f08-7620-440d-81b8-39fdb35d84a3-serving-cert\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385811 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385831 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd89578-60c3-4368-9b2c-59dc899d1a08-config\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385849 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385881 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-policies\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385911 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385932 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-client-ca\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385953 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/87128957-2efe-44be-bfa2-a5dc8a251453-machine-approver-tls\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385973 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-image-import-ca\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385997 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kblxb\" (UniqueName: \"kubernetes.io/projected/30d901bc-be28-4ddc-b46f-05fffb35ec40-kube-api-access-kblxb\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386032 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386055 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgsrk\" (UniqueName: \"kubernetes.io/projected/55f07f08-7620-440d-81b8-39fdb35d84a3-kube-api-access-lgsrk\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386087 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386106 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386133 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-etcd-client\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386173 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8t9p\" (UniqueName: \"kubernetes.io/projected/7cc18dde-121c-4f72-a2a1-25c7a5559a5a-kube-api-access-d8t9p\") pod \"cluster-samples-operator-665b6dd947-kbglz\" (UID: \"7cc18dde-121c-4f72-a2a1-25c7a5559a5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386206 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386232 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-audit-policies\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc 
kubenswrapper[4771]: I0129 09:08:57.386252 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-etcd-client\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386271 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-client-ca\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386293 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-encryption-config\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386315 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-serving-cert\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386331 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13594924-bb90-4488-84c7-2046b323e219-serving-cert\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386349 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-serving-cert\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386366 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-audit\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47cda8f8-ad10-4898-a066-1c388df82ab4-audit-dir\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386397 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-dir\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386412 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cc18dde-121c-4f72-a2a1-25c7a5559a5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kbglz\" (UID: \"7cc18dde-121c-4f72-a2a1-25c7a5559a5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpqr5\" (UniqueName: \"kubernetes.io/projected/f6553555-ed45-445a-a12a-c332f7d8ac0e-kube-api-access-vpqr5\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.386469 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-serving-cert\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.385091 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.388500 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.387383 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.389283 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.389832 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.390154 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.390339 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.390439 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.390377 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.397796 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.398805 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.399989 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.406152 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.407155 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.407162 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.408024 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.408183 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.409397 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.410683 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bdcnw"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.411818 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.414656 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-kxgbp"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.415955 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.416948 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.417070 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.417172 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.418058 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.418579 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.418609 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.419250 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wszgv"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.419883 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.420912 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.421470 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.422602 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.424427 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.425293 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqgdw"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.425688 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.426006 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.426142 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.433424 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wzrzm"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.434565 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.435867 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.436829 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.436906 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.439645 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mbqr8"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.440640 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.441401 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-95rng"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.457840 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.460009 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.461618 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.463520 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.465158 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.465808 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.467164 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.471969 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ld9n"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.472028 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.472515 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.476848 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.487438 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.488934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-encryption-config\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489003 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-serving-cert\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489039 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13594924-bb90-4488-84c7-2046b323e219-serving-cert\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-serving-cert\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489107 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-audit\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47cda8f8-ad10-4898-a066-1c388df82ab4-audit-dir\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489163 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-dir\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489189 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7cc18dde-121c-4f72-a2a1-25c7a5559a5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kbglz\" (UID: \"7cc18dde-121c-4f72-a2a1-25c7a5559a5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpqr5\" (UniqueName: \"kubernetes.io/projected/f6553555-ed45-445a-a12a-c332f7d8ac0e-kube-api-access-vpqr5\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489269 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-serving-cert\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489304 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb98z\" (UniqueName: \"kubernetes.io/projected/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-kube-api-access-sb98z\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489366 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489403 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-encryption-config\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489434 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x6sc\" (UniqueName: \"kubernetes.io/projected/87128957-2efe-44be-bfa2-a5dc8a251453-kube-api-access-6x6sc\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489463 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f6553555-ed45-445a-a12a-c332f7d8ac0e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: 
\"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489491 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/acd89578-60c3-4368-9b2c-59dc899d1a08-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489522 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-config\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489550 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489577 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-service-ca-bundle\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489622 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-config\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489676 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-config\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489734 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 
29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt2x\" (UniqueName: \"kubernetes.io/projected/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-kube-api-access-2wt2x\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489821 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489846 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxr6v\" (UniqueName: \"kubernetes.io/projected/47cda8f8-ad10-4898-a066-1c388df82ab4-kube-api-access-zxr6v\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489878 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489910 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb03e2-b241-4df4-8295-33b5ad3eae58-audit-dir\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489929 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckvk2\" (UniqueName: \"kubernetes.io/projected/acd89578-60c3-4368-9b2c-59dc899d1a08-kube-api-access-ckvk2\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489959 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87128957-2efe-44be-bfa2-a5dc8a251453-config\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.489989 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/47cda8f8-ad10-4898-a066-1c388df82ab4-node-pullsecrets\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.490020 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/acd89578-60c3-4368-9b2c-59dc899d1a08-images\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.490430 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f6553555-ed45-445a-a12a-c332f7d8ac0e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.491615 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfcb03e2-b241-4df4-8295-33b5ad3eae58-audit-dir\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.493058 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87128957-2efe-44be-bfa2-a5dc8a251453-config\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.493222 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/acd89578-60c3-4368-9b2c-59dc899d1a08-images\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.493319 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47cda8f8-ad10-4898-a066-1c388df82ab4-node-pullsecrets\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.493528 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.493632 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47cda8f8-ad10-4898-a066-1c388df82ab4-audit-dir\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.495473 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-dir\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.497806 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.498282 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.498485 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.499102 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13594924-bb90-4488-84c7-2046b323e219-serving-cert\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.499267 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.499905 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-audit\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.500013 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-service-ca-bundle\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.500612 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.500966 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-config\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: 
I0129 09:08:57.501926 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-config\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502120 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-config\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502274 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502357 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvg8r\" (UniqueName: \"kubernetes.io/projected/bfcb03e2-b241-4df4-8295-33b5ad3eae58-kube-api-access-xvg8r\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502454 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6553555-ed45-445a-a12a-c332f7d8ac0e-serving-cert\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502546 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502661 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87128957-2efe-44be-bfa2-a5dc8a251453-auth-proxy-config\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502780 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-etcd-serving-ca\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502972 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvr4x\" (UniqueName: 
\"kubernetes.io/projected/13594924-bb90-4488-84c7-2046b323e219-kube-api-access-hvr4x\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.503060 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f07f08-7620-440d-81b8-39fdb35d84a3-serving-cert\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.503150 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.503191 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.503319 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd89578-60c3-4368-9b2c-59dc899d1a08-config\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.503427 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.503496 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-config\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.503787 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-policies\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504132 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-client-ca\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/87128957-2efe-44be-bfa2-a5dc8a251453-machine-approver-tls\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504330 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-image-import-ca\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504423 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kblxb\" (UniqueName: \"kubernetes.io/projected/30d901bc-be28-4ddc-b46f-05fffb35ec40-kube-api-access-kblxb\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504572 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504779 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgsrk\" (UniqueName: \"kubernetes.io/projected/55f07f08-7620-440d-81b8-39fdb35d84a3-kube-api-access-lgsrk\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504917 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505032 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 
29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505140 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-etcd-client\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505232 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505319 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8t9p\" (UniqueName: \"kubernetes.io/projected/7cc18dde-121c-4f72-a2a1-25c7a5559a5a-kube-api-access-d8t9p\") pod \"cluster-samples-operator-665b6dd947-kbglz\" (UID: \"7cc18dde-121c-4f72-a2a1-25c7a5559a5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505409 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505530 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd89578-60c3-4368-9b2c-59dc899d1a08-config\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502801 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-encryption-config\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-policies\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505645 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-client-ca\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505492 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-audit-policies\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.505927 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-etcd-client\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.506055 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-client-ca\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.506139 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504228 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-etcd-serving-ca\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.506484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504437 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/acd89578-60c3-4368-9b2c-59dc899d1a08-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.502669 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13594924-bb90-4488-84c7-2046b323e219-config\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.504972 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87128957-2efe-44be-bfa2-a5dc8a251453-auth-proxy-config\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 
09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.506959 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-serving-cert\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.508825 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.509657 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-image-import-ca\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.511674 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.512039 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-encryption-config\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.512119 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfcb03e2-b241-4df4-8295-33b5ad3eae58-audit-policies\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.512419 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-serving-cert\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.512449 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-serving-cert\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.512650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.512817 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.513080 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9vbh"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.513228 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.513669 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jzc5h"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.514021 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.514209 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47cda8f8-ad10-4898-a066-1c388df82ab4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.514831 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47cda8f8-ad10-4898-a066-1c388df82ab4-etcd-client\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.515337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.515383 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfcb03e2-b241-4df4-8295-33b5ad3eae58-etcd-client\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.515769 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5xjmf"] Jan 29 09:08:57 crc kubenswrapper[4771]: 
I0129 09:08:57.516228 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.516325 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7cc18dde-121c-4f72-a2a1-25c7a5559a5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kbglz\" (UID: \"7cc18dde-121c-4f72-a2a1-25c7a5559a5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.516360 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6553555-ed45-445a-a12a-c332f7d8ac0e-serving-cert\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.516919 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.517240 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f07f08-7620-440d-81b8-39fdb35d84a3-serving-cert\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.517829 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.518566 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/87128957-2efe-44be-bfa2-a5dc8a251453-machine-approver-tls\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.519201 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.521049 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.522068 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.522962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.524466 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.526331 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-client-ca\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.526377 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.528342 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.530050 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nn2z2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.531739 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.533002 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cpgl8"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.534544 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqgdw"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.534751 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.535434 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.537350 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.546553 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.548206 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.549484 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.556276 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wszgv"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.556943 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.558400 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.559166 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.569546 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.570479 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kxgbp"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.572255 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.573404 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.574464 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mbqr8"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.575944 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rsp6c"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.576392 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.576618 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bdcnw"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.577667 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.578792 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jcdkc"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.580875 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cpgl8"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.582796 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h84vs"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.585075 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-w8f4k"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.585252 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.586661 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.589916 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h84vs"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.591721 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w8f4k"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.593376 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r"] Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.596555 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.616942 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.638264 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.668959 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.677355 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.696924 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.717963 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.737602 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.757305 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.777304 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.797504 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.836637 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.857802 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.877628 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.898352 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.917837 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.938230 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.957823 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.978469 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 09:08:57 crc kubenswrapper[4771]: I0129 09:08:57.998261 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.021844 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.038050 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.057787 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.077747 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.097779 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.118290 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.137909 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.158231 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 
09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.177969 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.197797 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.216725 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.238428 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.257462 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.278125 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.297713 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.317837 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.339251 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.358260 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.378314 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.397747 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.416795 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.434963 4771 request.go:700] Waited for 1.00829194s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.437638 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.457532 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.477580 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.496507 4771 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.517819 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.543881 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.557160 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.578049 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.597760 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.617118 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.637229 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.657479 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.677380 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.697041 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.716888 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.739060 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.757077 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.777067 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.797600 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.817560 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.857247 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.877581 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" 
Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.898061 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.917587 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.937437 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.956391 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.977234 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 09:08:58 crc kubenswrapper[4771]: I0129 09:08:58.997318 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.017036 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.037055 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.058126 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.077995 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.098863 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.117604 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.139959 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.172384 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckvk2\" (UniqueName: \"kubernetes.io/projected/acd89578-60c3-4368-9b2c-59dc899d1a08-kube-api-access-ckvk2\") pod \"machine-api-operator-5694c8668f-rsp6c\" (UID: \"acd89578-60c3-4368-9b2c-59dc899d1a08\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.191641 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpqr5\" (UniqueName: \"kubernetes.io/projected/f6553555-ed45-445a-a12a-c332f7d8ac0e-kube-api-access-vpqr5\") pod \"openshift-config-operator-7777fb866f-qdxtj\" (UID: \"f6553555-ed45-445a-a12a-c332f7d8ac0e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.212442 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zxr6v\" (UniqueName: \"kubernetes.io/projected/47cda8f8-ad10-4898-a066-1c388df82ab4-kube-api-access-zxr6v\") pod \"apiserver-76f77b778f-ztsgd\" (UID: \"47cda8f8-ad10-4898-a066-1c388df82ab4\") " pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.234354 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt2x\" (UniqueName: \"kubernetes.io/projected/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-kube-api-access-2wt2x\") pod \"controller-manager-879f6c89f-5ld9n\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.253474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb98z\" (UniqueName: \"kubernetes.io/projected/fa5d10fe-44e0-4910-aa58-693cd50e8ab4-kube-api-access-sb98z\") pod \"openshift-apiserver-operator-796bbdcf4f-5nllz\" (UID: \"fa5d10fe-44e0-4910-aa58-693cd50e8ab4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.265394 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.272963 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x6sc\" (UniqueName: \"kubernetes.io/projected/87128957-2efe-44be-bfa2-a5dc8a251453-kube-api-access-6x6sc\") pod \"machine-approver-56656f9798-92tqz\" (UID: \"87128957-2efe-44be-bfa2-a5dc8a251453\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.294638 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvg8r\" (UniqueName: \"kubernetes.io/projected/bfcb03e2-b241-4df4-8295-33b5ad3eae58-kube-api-access-xvg8r\") pod \"apiserver-7bbb656c7d-dtnz2\" (UID: \"bfcb03e2-b241-4df4-8295-33b5ad3eae58\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.325538 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvr4x\" (UniqueName: \"kubernetes.io/projected/13594924-bb90-4488-84c7-2046b323e219-kube-api-access-hvr4x\") pod \"authentication-operator-69f744f599-9qbzj\" (UID: \"13594924-bb90-4488-84c7-2046b323e219\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.341903 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgsrk\" (UniqueName: \"kubernetes.io/projected/55f07f08-7620-440d-81b8-39fdb35d84a3-kube-api-access-lgsrk\") pod \"route-controller-manager-6576b87f9c-f9qrd\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.356375 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kblxb\" (UniqueName: \"kubernetes.io/projected/30d901bc-be28-4ddc-b46f-05fffb35ec40-kube-api-access-kblxb\") pod \"oauth-openshift-558db77b4-nn2z2\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") " pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 
09:08:59.376671 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.377857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8t9p\" (UniqueName: \"kubernetes.io/projected/7cc18dde-121c-4f72-a2a1-25c7a5559a5a-kube-api-access-d8t9p\") pod \"cluster-samples-operator-665b6dd947-kbglz\" (UID: \"7cc18dde-121c-4f72-a2a1-25c7a5559a5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.392363 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.399054 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.399426 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.404972 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.416656 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.435415 4771 request.go:700] Waited for 1.900223162s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.435527 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.437300 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.455047 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.459380 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.478737 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.487161 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj"] Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.500186 4771 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.541473 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.541976 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.542834 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.545272 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.545320 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 09:08:59 crc kubenswrapper[4771]: W0129 09:08:59.557049 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6553555_ed45_445a_a12a_c332f7d8ac0e.slice/crio-18296ad090c4abc1133a7096ac6f39aa0015efad5e56f8323ea0cef1df7f460a WatchSource:0}: Error finding container 18296ad090c4abc1133a7096ac6f39aa0015efad5e56f8323ea0cef1df7f460a: Status 404 returned error can't find the container with id 18296ad090c4abc1133a7096ac6f39aa0015efad5e56f8323ea0cef1df7f460a Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.557414 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.583711 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.612169 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.643951 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-config\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/796d2283-9389-48d3-9e0a-e71cf4f58ce1-metrics-tls\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644148 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-service-ca\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644190 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9e54e5c2-f953-4249-9586-56c08b2e7631-signing-cabundle\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644247 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-bound-sa-token\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd247d04-6c97-4819-a31d-f1f7eb95d40a-config\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644322 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/796d2283-9389-48d3-9e0a-e71cf4f58ce1-trusted-ca\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644345 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-serving-cert\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 
09:08:59.644394 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7z9\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-kube-api-access-gn7z9\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644431 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9e54e5c2-f953-4249-9586-56c08b2e7631-signing-key\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644459 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk7dc\" (UniqueName: \"kubernetes.io/projected/e6a19255-f81c-4f70-817b-0450703e4962-kube-api-access-kk7dc\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644486 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd247d04-6c97-4819-a31d-f1f7eb95d40a-trusted-ca\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644532 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/796d2283-9389-48d3-9e0a-e71cf4f58ce1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644639 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-certificates\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-oauth-config\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644685 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnrg\" (UniqueName: \"kubernetes.io/projected/2fd142c7-125b-41ad-a645-c1eac4caa96b-kube-api-access-mfnrg\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644939 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cdgh\" (UniqueName: \"kubernetes.io/projected/bf5607c1-6059-4a83-b10b-28de3d1b872a-kube-api-access-7cdgh\") pod \"migrator-59844c95c7-28dgh\" (UID: \"bf5607c1-6059-4a83-b10b-28de3d1b872a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.644971 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd247d04-6c97-4819-a31d-f1f7eb95d40a-serving-cert\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645206 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-trusted-ca-bundle\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645261 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-oauth-serving-cert\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42554\" (UniqueName: \"kubernetes.io/projected/9e54e5c2-f953-4249-9586-56c08b2e7631-kube-api-access-42554\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645425 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a19255-f81c-4f70-817b-0450703e4962-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645471 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645548 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645611 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef9891b-0ec1-4469-8701-57d0831b4046-config\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645773 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-tls\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645834 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef9891b-0ec1-4469-8701-57d0831b4046-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.645894 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-trusted-ca\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.646003 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-config\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: E0129 09:08:59.646847 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.146820336 +0000 UTC m=+160.269660563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.648063 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe9ebbbe-af6e-409d-8039-db5fb66d062b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.648117 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ef9891b-0ec1-4469-8701-57d0831b4046-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.648151 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tpb\" (UniqueName: \"kubernetes.io/projected/796d2283-9389-48d3-9e0a-e71cf4f58ce1-kube-api-access-j9tpb\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.648182 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6a19255-f81c-4f70-817b-0450703e4962-images\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.648398 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe9ebbbe-af6e-409d-8039-db5fb66d062b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.648803 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6a19255-f81c-4f70-817b-0450703e4962-proxy-tls\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.648954 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.649030 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkcfx\" (UniqueName: \"kubernetes.io/projected/dd247d04-6c97-4819-a31d-f1f7eb95d40a-kube-api-access-lkcfx\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.693577 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd"] Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.697028 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" event={"ID":"f6553555-ed45-445a-a12a-c332f7d8ac0e","Type":"ContainerStarted","Data":"18296ad090c4abc1133a7096ac6f39aa0015efad5e56f8323ea0cef1df7f460a"} Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.698106 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" event={"ID":"87128957-2efe-44be-bfa2-a5dc8a251453","Type":"ContainerStarted","Data":"2fe73ea9a296029d04403a3b04c2d70c168f4dc8639ea29c0f492b01e675210c"} Jan 29 09:08:59 crc kubenswrapper[4771]: W0129 09:08:59.710039 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55f07f08_7620_440d_81b8_39fdb35d84a3.slice/crio-2720d6c1a629a441023a2b3e69724bf5ce11ffe462018963b6e804c1152935b6 WatchSource:0}: Error finding container 2720d6c1a629a441023a2b3e69724bf5ce11ffe462018963b6e804c1152935b6: Status 404 returned error can't find the container with id 2720d6c1a629a441023a2b3e69724bf5ce11ffe462018963b6e804c1152935b6 Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.750661 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.750904 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-bound-sa-token\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.750941 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv757\" (UniqueName: \"kubernetes.io/projected/a40a655e-56fc-4578-8dd9-6ae371433ea0-kube-api-access-tv757\") pod \"downloads-7954f5f757-kxgbp\" (UID: \"a40a655e-56fc-4578-8dd9-6ae371433ea0\") " pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.750970 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljnf\" (UniqueName: \"kubernetes.io/projected/72d00e01-7d77-4404-ab20-dcccc7764b69-kube-api-access-gljnf\") pod \"router-default-5444994796-95rng\" (UID: 
\"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.750996 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/796d2283-9389-48d3-9e0a-e71cf4f58ce1-trusted-ca\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751025 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-serving-cert\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751049 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbz72\" (UniqueName: \"kubernetes.io/projected/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-kube-api-access-fbz72\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk7dc\" (UniqueName: \"kubernetes.io/projected/e6a19255-f81c-4f70-817b-0450703e4962-kube-api-access-kk7dc\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751096 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a93775b-af07-44d2-b075-59838e7a8920-apiservice-cert\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751130 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn7z9\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-kube-api-access-gn7z9\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751149 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd247d04-6c97-4819-a31d-f1f7eb95d40a-trusted-ca\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751165 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-node-bootstrap-token\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 
09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751184 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-certificates\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751202 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnrg\" (UniqueName: \"kubernetes.io/projected/2fd142c7-125b-41ad-a645-c1eac4caa96b-kube-api-access-mfnrg\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd247d04-6c97-4819-a31d-f1f7eb95d40a-serving-cert\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751250 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgsdp\" (UniqueName: \"kubernetes.io/projected/7e132aa6-1055-48c5-9a8a-086f008c7a70-kube-api-access-cgsdp\") pod \"dns-operator-744455d44c-mbqr8\" (UID: \"7e132aa6-1055-48c5-9a8a-086f008c7a70\") " pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751268 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9pcz\" (UniqueName: \"kubernetes.io/projected/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-kube-api-access-g9pcz\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751287 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0e399b2-7666-44a6-b886-5d941a985630-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751303 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2a126bb5-11e1-49ee-8270-687373b56481-cert\") pod \"ingress-canary-cpgl8\" (UID: \"2a126bb5-11e1-49ee-8270-687373b56481\") " pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c98k6\" (UniqueName: \"kubernetes.io/projected/bafa5883-800f-430b-8ac9-580ad34cc571-kube-api-access-c98k6\") pod \"package-server-manager-789f6589d5-pdvmd\" (UID: \"bafa5883-800f-430b-8ac9-580ad34cc571\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751346 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cda2f63-799c-4e05-894d-c0fe721cf974-config-volume\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751365 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a93775b-af07-44d2-b075-59838e7a8920-tmpfs\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751384 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-oauth-serving-cert\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751401 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7afa746d-e424-4032-9e51-35cf653e50ac-srv-cert\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751419 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp54h\" (UniqueName: \"kubernetes.io/projected/872b5467-e306-44a0-b142-baea0c170180-kube-api-access-dp54h\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751440 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-metrics-certs\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751458 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a19255-f81c-4f70-817b-0450703e4962-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751475 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751492 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0e399b2-7666-44a6-b886-5d941a985630-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751512 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z92lp\" (UniqueName: \"kubernetes.io/projected/8a93775b-af07-44d2-b075-59838e7a8920-kube-api-access-z92lp\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751528 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4dh\" (UniqueName: \"kubernetes.io/projected/7afa746d-e424-4032-9e51-35cf653e50ac-kube-api-access-7n4dh\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751545 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhckb\" (UniqueName: \"kubernetes.io/projected/2a126bb5-11e1-49ee-8270-687373b56481-kube-api-access-vhckb\") pod \"ingress-canary-cpgl8\" (UID: \"2a126bb5-11e1-49ee-8270-687373b56481\") " pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751566 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef9891b-0ec1-4469-8701-57d0831b4046-config\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751585 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-tls\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751604 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef9891b-0ec1-4469-8701-57d0831b4046-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751622 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nd6\" (UniqueName: \"kubernetes.io/projected/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-kube-api-access-27nd6\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751641 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-trusted-ca\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751661 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bda7c1e-4453-4f83-8b3d-c8118debe332-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751679 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751714 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-config\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95rq\" (UniqueName: \"kubernetes.io/projected/8cda2f63-799c-4e05-894d-c0fe721cf974-kube-api-access-m95rq\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751761 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10c85b6c-532f-4691-8e70-4abc2c4f668c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751787 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-config\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751805 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-config\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.751961 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/fe9ebbbe-af6e-409d-8039-db5fb66d062b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752030 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab0e857c-3b75-4351-82e1-5faeaf0317be-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752070 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkcfx\" (UniqueName: \"kubernetes.io/projected/dd247d04-6c97-4819-a31d-f1f7eb95d40a-kube-api-access-lkcfx\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a93775b-af07-44d2-b075-59838e7a8920-webhook-cert\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-config\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752154 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e132aa6-1055-48c5-9a8a-086f008c7a70-metrics-tls\") pod \"dns-operator-744455d44c-mbqr8\" (UID: \"7e132aa6-1055-48c5-9a8a-086f008c7a70\") " pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:08:59 crc kubenswrapper[4771]: E0129 09:08:59.752185 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.25215478 +0000 UTC m=+160.374995007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752250 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdv5j\" (UniqueName: \"kubernetes.io/projected/f92b62e7-3351-4e65-a49d-49b6a6217796-kube-api-access-jdv5j\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752302 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/796d2283-9389-48d3-9e0a-e71cf4f58ce1-metrics-tls\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752329 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9e54e5c2-f953-4249-9586-56c08b2e7631-signing-cabundle\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752353 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cda2f63-799c-4e05-894d-c0fe721cf974-secret-volume\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752377 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd247d04-6c97-4819-a31d-f1f7eb95d40a-config\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752397 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrn5\" (UniqueName: \"kubernetes.io/projected/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-kube-api-access-wcrn5\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752433 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-serving-cert\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752460 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/9e54e5c2-f953-4249-9586-56c08b2e7631-signing-key\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752486 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0e399b2-7666-44a6-b886-5d941a985630-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752529 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a2442c-28f9-4390-bd1a-06a7542576a8-serving-cert\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752549 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872b5467-e306-44a0-b142-baea0c170180-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-plugins-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752602 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-default-certificate\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752626 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/796d2283-9389-48d3-9e0a-e71cf4f58ce1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752644 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-registration-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752661 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-oauth-config\") pod 
\"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752680 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cdgh\" (UniqueName: \"kubernetes.io/projected/bf5607c1-6059-4a83-b10b-28de3d1b872a-kube-api-access-7cdgh\") pod \"migrator-59844c95c7-28dgh\" (UID: \"bf5607c1-6059-4a83-b10b-28de3d1b872a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752728 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752756 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8sj\" (UniqueName: \"kubernetes.io/projected/8bda7c1e-4453-4f83-8b3d-c8118debe332-kube-api-access-mf8sj\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752783 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7grhm\" (UniqueName: \"kubernetes.io/projected/34f0263f-c771-4ef0-91be-9d37f9ba6d60-kube-api-access-7grhm\") pod \"control-plane-machine-set-operator-78cbb6b69f-qvr68\" (UID: \"34f0263f-c771-4ef0-91be-9d37f9ba6d60\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752801 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-stats-auth\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752821 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-trusted-ca-bundle\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752877 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42554\" (UniqueName: \"kubernetes.io/projected/9e54e5c2-f953-4249-9586-56c08b2e7631-kube-api-access-42554\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752910 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: 
\"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752934 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss9mb\" (UniqueName: \"kubernetes.io/projected/39ef9676-ef1e-4410-a82a-3045b2986e88-kube-api-access-ss9mb\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.752967 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-oauth-serving-cert\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.754025 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a19255-f81c-4f70-817b-0450703e4962-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.754440 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd247d04-6c97-4819-a31d-f1f7eb95d40a-trusted-ca\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755073 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef9891b-0ec1-4469-8701-57d0831b4046-config\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34f0263f-c771-4ef0-91be-9d37f9ba6d60-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qvr68\" (UID: \"34f0263f-c771-4ef0-91be-9d37f9ba6d60\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755781 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-srv-cert\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755827 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" 
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755855 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab0e857c-3b75-4351-82e1-5faeaf0317be-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755886 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c85b6c-532f-4691-8e70-4abc2c4f668c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755937 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxhk\" (UniqueName: \"kubernetes.io/projected/ab0e857c-3b75-4351-82e1-5faeaf0317be-kube-api-access-qmxhk\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.755977 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872b5467-e306-44a0-b142-baea0c170180-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72d00e01-7d77-4404-ab20-dcccc7764b69-service-ca-bundle\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756056 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79jh\" (UniqueName: \"kubernetes.io/projected/2fa1d696-380f-4007-99d9-2853d03c5782-kube-api-access-l79jh\") pod \"multus-admission-controller-857f4d67dd-wszgv\" (UID: \"2fa1d696-380f-4007-99d9-2853d03c5782\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756078 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c85b6c-532f-4691-8e70-4abc2c4f668c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756115 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-metrics-tls\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756145 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bda7c1e-4453-4f83-8b3d-c8118debe332-proxy-tls\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756169 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-ca\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756197 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-service-ca\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756231 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bafa5883-800f-430b-8ac9-580ad34cc571-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdvmd\" (UID: \"bafa5883-800f-430b-8ac9-580ad34cc571\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756289 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe9ebbbe-af6e-409d-8039-db5fb66d062b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ef9891b-0ec1-4469-8701-57d0831b4046-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756358 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9tpb\" (UniqueName: \"kubernetes.io/projected/796d2283-9389-48d3-9e0a-e71cf4f58ce1-kube-api-access-j9tpb\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756387 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6a19255-f81c-4f70-817b-0450703e4962-images\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: 
\"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756420 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-certs\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756457 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-config-volume\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756482 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t8qg\" (UniqueName: \"kubernetes.io/projected/d0e399b2-7666-44a6-b886-5d941a985630-kube-api-access-6t8qg\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756570 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6a19255-f81c-4f70-817b-0450703e4962-proxy-tls\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-client\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756621 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2fa1d696-380f-4007-99d9-2853d03c5782-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wszgv\" (UID: \"2fa1d696-380f-4007-99d9-2853d03c5782\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756653 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756678 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-mountpoint-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" 
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756740 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-socket-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7afa746d-e424-4032-9e51-35cf653e50ac-profile-collector-cert\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756807 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k899g\" (UniqueName: \"kubernetes.io/projected/30a2442c-28f9-4390-bd1a-06a7542576a8-kube-api-access-k899g\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756834 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-csi-data-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.756884 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-service-ca\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.757085 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd247d04-6c97-4819-a31d-f1f7eb95d40a-config\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.757259 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9e54e5c2-f953-4249-9586-56c08b2e7631-signing-cabundle\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.757398 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe9ebbbe-af6e-409d-8039-db5fb66d062b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:08:59 crc kubenswrapper[4771]: E0129 09:08:59.757430 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.257412779 +0000 UTC m=+160.380253006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.757588 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/796d2283-9389-48d3-9e0a-e71cf4f58ce1-trusted-ca\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.758156 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6a19255-f81c-4f70-817b-0450703e4962-images\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.758483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-config\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.758794 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-certificates\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.761648 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-tls\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.761716 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef9891b-0ec1-4469-8701-57d0831b4046-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.762089 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6a19255-f81c-4f70-817b-0450703e4962-proxy-tls\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2"
Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.762594 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-service-ca\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.763455 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd247d04-6c97-4819-a31d-f1f7eb95d40a-serving-cert\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.765158 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-config\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.768293 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-trusted-ca-bundle\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.768466 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-oauth-config\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.787234 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-serving-cert\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.810465 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9e54e5c2-f953-4249-9586-56c08b2e7631-signing-key\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.813139 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/796d2283-9389-48d3-9e0a-e71cf4f58ce1-metrics-tls\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.813757 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe9ebbbe-af6e-409d-8039-db5fb66d062b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.814008 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-ztsgd"] Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.820441 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn7z9\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-kube-api-access-gn7z9\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.826153 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.827160 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkcfx\" (UniqueName: \"kubernetes.io/projected/dd247d04-6c97-4819-a31d-f1f7eb95d40a-kube-api-access-lkcfx\") pod \"console-operator-58897d9998-5xjmf\" (UID: \"dd247d04-6c97-4819-a31d-f1f7eb95d40a\") " pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.827910 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-trusted-ca\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: W0129 09:08:59.844654 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47cda8f8_ad10_4898_a066_1c388df82ab4.slice/crio-eb6a6d855a53533f95e76746ab9ac1146804951752e938cfab4d017aa35ea0e0 WatchSource:0}: Error finding container eb6a6d855a53533f95e76746ab9ac1146804951752e938cfab4d017aa35ea0e0: Status 404 returned error can't find the container with id eb6a6d855a53533f95e76746ab9ac1146804951752e938cfab4d017aa35ea0e0 Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.854453 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-bound-sa-token\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859072 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:08:59 crc kubenswrapper[4771]: E0129 09:08:59.859254 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.359221974 +0000 UTC m=+160.482062201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859307 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nd6\" (UniqueName: \"kubernetes.io/projected/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-kube-api-access-27nd6\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859348 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bda7c1e-4453-4f83-8b3d-c8118debe332-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859371 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859407 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95rq\" (UniqueName: \"kubernetes.io/projected/8cda2f63-799c-4e05-894d-c0fe721cf974-kube-api-access-m95rq\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859435 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10c85b6c-532f-4691-8e70-4abc2c4f668c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859461 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-config\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859490 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-config\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859513 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab0e857c-3b75-4351-82e1-5faeaf0317be-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859561 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e132aa6-1055-48c5-9a8a-086f008c7a70-metrics-tls\") pod \"dns-operator-744455d44c-mbqr8\" (UID: \"7e132aa6-1055-48c5-9a8a-086f008c7a70\") " pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859584 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a93775b-af07-44d2-b075-59838e7a8920-webhook-cert\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859605 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdv5j\" (UniqueName: \"kubernetes.io/projected/f92b62e7-3351-4e65-a49d-49b6a6217796-kube-api-access-jdv5j\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859637 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cda2f63-799c-4e05-894d-c0fe721cf974-secret-volume\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859661 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrn5\" (UniqueName: \"kubernetes.io/projected/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-kube-api-access-wcrn5\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859716 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0e399b2-7666-44a6-b886-5d941a985630-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a2442c-28f9-4390-bd1a-06a7542576a8-serving-cert\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859760 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872b5467-e306-44a0-b142-baea0c170180-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-registration-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859820 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-plugins-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859843 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-default-certificate\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859881 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859912 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8sj\" (UniqueName: \"kubernetes.io/projected/8bda7c1e-4453-4f83-8b3d-c8118debe332-kube-api-access-mf8sj\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859951 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7grhm\" (UniqueName: \"kubernetes.io/projected/34f0263f-c771-4ef0-91be-9d37f9ba6d60-kube-api-access-7grhm\") pod \"control-plane-machine-set-operator-78cbb6b69f-qvr68\" (UID: \"34f0263f-c771-4ef0-91be-9d37f9ba6d60\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.859982 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-stats-auth\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860016 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860035 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-srv-cert\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss9mb\" (UniqueName: \"kubernetes.io/projected/39ef9676-ef1e-4410-a82a-3045b2986e88-kube-api-access-ss9mb\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34f0263f-c771-4ef0-91be-9d37f9ba6d60-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qvr68\" (UID: \"34f0263f-c771-4ef0-91be-9d37f9ba6d60\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860100 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab0e857c-3b75-4351-82e1-5faeaf0317be-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860116 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c85b6c-532f-4691-8e70-4abc2c4f668c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmxhk\" (UniqueName: \"kubernetes.io/projected/ab0e857c-3b75-4351-82e1-5faeaf0317be-kube-api-access-qmxhk\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860164 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872b5467-e306-44a0-b142-baea0c170180-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72d00e01-7d77-4404-ab20-dcccc7764b69-service-ca-bundle\") pod \"router-default-5444994796-95rng\" (UID: 
\"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860210 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l79jh\" (UniqueName: \"kubernetes.io/projected/2fa1d696-380f-4007-99d9-2853d03c5782-kube-api-access-l79jh\") pod \"multus-admission-controller-857f4d67dd-wszgv\" (UID: \"2fa1d696-380f-4007-99d9-2853d03c5782\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860236 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c85b6c-532f-4691-8e70-4abc2c4f668c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860261 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-metrics-tls\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860285 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bafa5883-800f-430b-8ac9-580ad34cc571-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdvmd\" (UID: \"bafa5883-800f-430b-8ac9-580ad34cc571\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860314 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bda7c1e-4453-4f83-8b3d-c8118debe332-proxy-tls\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860329 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-ca\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860346 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-service-ca\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860372 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-certs\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860391 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-config-volume\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860408 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t8qg\" (UniqueName: \"kubernetes.io/projected/d0e399b2-7666-44a6-b886-5d941a985630-kube-api-access-6t8qg\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860430 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-client\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860440 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-plugins-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860456 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2fa1d696-380f-4007-99d9-2853d03c5782-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wszgv\" (UID: \"2fa1d696-380f-4007-99d9-2853d03c5782\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860479 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-mountpoint-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860498 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-socket-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860526 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7afa746d-e424-4032-9e51-35cf653e50ac-profile-collector-cert\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860549 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-csi-data-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc 
kubenswrapper[4771]: I0129 09:08:59.860566 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k899g\" (UniqueName: \"kubernetes.io/projected/30a2442c-28f9-4390-bd1a-06a7542576a8-kube-api-access-k899g\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860591 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv757\" (UniqueName: \"kubernetes.io/projected/a40a655e-56fc-4578-8dd9-6ae371433ea0-kube-api-access-tv757\") pod \"downloads-7954f5f757-kxgbp\" (UID: \"a40a655e-56fc-4578-8dd9-6ae371433ea0\") " pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860607 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gljnf\" (UniqueName: \"kubernetes.io/projected/72d00e01-7d77-4404-ab20-dcccc7764b69-kube-api-access-gljnf\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860623 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-serving-cert\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbz72\" (UniqueName: \"kubernetes.io/projected/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-kube-api-access-fbz72\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860705 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a93775b-af07-44d2-b075-59838e7a8920-apiservice-cert\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860724 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-node-bootstrap-token\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860733 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bda7c1e-4453-4f83-8b3d-c8118debe332-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860783 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgsdp\" (UniqueName: 
\"kubernetes.io/projected/7e132aa6-1055-48c5-9a8a-086f008c7a70-kube-api-access-cgsdp\") pod \"dns-operator-744455d44c-mbqr8\" (UID: \"7e132aa6-1055-48c5-9a8a-086f008c7a70\") " pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860805 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9pcz\" (UniqueName: \"kubernetes.io/projected/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-kube-api-access-g9pcz\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860823 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0e399b2-7666-44a6-b886-5d941a985630-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860889 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cda2f63-799c-4e05-894d-c0fe721cf974-config-volume\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860906 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a93775b-af07-44d2-b075-59838e7a8920-tmpfs\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860924 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2a126bb5-11e1-49ee-8270-687373b56481-cert\") pod \"ingress-canary-cpgl8\" (UID: \"2a126bb5-11e1-49ee-8270-687373b56481\") " pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c98k6\" (UniqueName: \"kubernetes.io/projected/bafa5883-800f-430b-8ac9-580ad34cc571-kube-api-access-c98k6\") pod \"package-server-manager-789f6589d5-pdvmd\" (UID: \"bafa5883-800f-430b-8ac9-580ad34cc571\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860961 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7afa746d-e424-4032-9e51-35cf653e50ac-srv-cert\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860979 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp54h\" (UniqueName: \"kubernetes.io/projected/872b5467-e306-44a0-b142-baea0c170180-kube-api-access-dp54h\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.860998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-metrics-certs\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.861014 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.861034 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0e399b2-7666-44a6-b886-5d941a985630-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.861054 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z92lp\" (UniqueName: \"kubernetes.io/projected/8a93775b-af07-44d2-b075-59838e7a8920-kube-api-access-z92lp\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.861071 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4dh\" (UniqueName: \"kubernetes.io/projected/7afa746d-e424-4032-9e51-35cf653e50ac-kube-api-access-7n4dh\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.861088 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhckb\" (UniqueName: \"kubernetes.io/projected/2a126bb5-11e1-49ee-8270-687373b56481-kube-api-access-vhckb\") pod \"ingress-canary-cpgl8\" (UID: \"2a126bb5-11e1-49ee-8270-687373b56481\") " pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.863723 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cda2f63-799c-4e05-894d-c0fe721cf974-config-volume\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.864009 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-config\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.864110 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a93775b-af07-44d2-b075-59838e7a8920-tmpfs\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.864672 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0e399b2-7666-44a6-b886-5d941a985630-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.865071 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-config\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.865225 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-socket-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.865838 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-ca\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.866304 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab0e857c-3b75-4351-82e1-5faeaf0317be-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.866318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-service-ca\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.871017 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42554\" (UniqueName: \"kubernetes.io/projected/9e54e5c2-f953-4249-9586-56c08b2e7631-kube-api-access-42554\") pod \"service-ca-9c57cc56f-c9vbh\" (UID: \"9e54e5c2-f953-4249-9586-56c08b2e7631\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.871201 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-registration-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc 
kubenswrapper[4771]: I0129 09:08:59.872397 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-node-bootstrap-token\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.873444 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72d00e01-7d77-4404-ab20-dcccc7764b69-service-ca-bundle\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.873841 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-config-volume\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.874186 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c85b6c-532f-4691-8e70-4abc2c4f668c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.874306 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.875314 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872b5467-e306-44a0-b142-baea0c170180-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.874199 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-csi-data-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.876221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2a126bb5-11e1-49ee-8270-687373b56481-cert\") pod \"ingress-canary-cpgl8\" (UID: \"2a126bb5-11e1-49ee-8270-687373b56481\") " pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:08:59 crc kubenswrapper[4771]: E0129 09:08:59.877135 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 09:09:00.377111191 +0000 UTC m=+160.499951418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.878074 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/39ef9676-ef1e-4410-a82a-3045b2986e88-mountpoint-dir\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.878373 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.879345 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a93775b-af07-44d2-b075-59838e7a8920-webhook-cert\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.879621 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.879633 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10c85b6c-532f-4691-8e70-4abc2c4f668c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.879800 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-default-certificate\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.885581 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7afa746d-e424-4032-9e51-35cf653e50ac-profile-collector-cert\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.898373 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bafa5883-800f-430b-8ac9-580ad34cc571-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdvmd\" (UID: \"bafa5883-800f-430b-8ac9-580ad34cc571\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.901441 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-srv-cert\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.901527 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-certs\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.901651 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0e399b2-7666-44a6-b886-5d941a985630-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.901812 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-metrics-certs\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.902659 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7afa746d-e424-4032-9e51-35cf653e50ac-srv-cert\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.905684 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cda2f63-799c-4e05-894d-c0fe721cf974-secret-volume\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.906758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e132aa6-1055-48c5-9a8a-086f008c7a70-metrics-tls\") pod \"dns-operator-744455d44c-mbqr8\" (UID: \"7e132aa6-1055-48c5-9a8a-086f008c7a70\") " pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.908031 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bda7c1e-4453-4f83-8b3d-c8118debe332-proxy-tls\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: 
\"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.908177 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2fa1d696-380f-4007-99d9-2853d03c5782-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wszgv\" (UID: \"2fa1d696-380f-4007-99d9-2853d03c5782\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.908328 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a2442c-28f9-4390-bd1a-06a7542576a8-serving-cert\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.908909 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab0e857c-3b75-4351-82e1-5faeaf0317be-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.910157 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872b5467-e306-44a0-b142-baea0c170180-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.910897 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ef9891b-0ec1-4469-8701-57d0831b4046-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wp8pt\" (UID: \"2ef9891b-0ec1-4469-8701-57d0831b4046\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.911474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30a2442c-28f9-4390-bd1a-06a7542576a8-etcd-client\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.911752 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34f0263f-c771-4ef0-91be-9d37f9ba6d60-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qvr68\" (UID: \"34f0263f-c771-4ef0-91be-9d37f9ba6d60\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.912079 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/72d00e01-7d77-4404-ab20-dcccc7764b69-stats-auth\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:08:59 
crc kubenswrapper[4771]: I0129 09:08:59.912447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-metrics-tls\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.912539 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-serving-cert\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.913245 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a93775b-af07-44d2-b075-59838e7a8920-apiservice-cert\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.916305 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9tpb\" (UniqueName: \"kubernetes.io/projected/796d2283-9389-48d3-9e0a-e71cf4f58ce1-kube-api-access-j9tpb\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.931887 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk7dc\" (UniqueName: \"kubernetes.io/projected/e6a19255-f81c-4f70-817b-0450703e4962-kube-api-access-kk7dc\") pod \"machine-config-operator-74547568cd-7r6g2\" (UID: \"e6a19255-f81c-4f70-817b-0450703e4962\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.946349 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz"] Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.948913 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2370fdf0-204b-4cbe-a001-8bf7193fb0dd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4xvsw\" (UID: \"2370fdf0-204b-4cbe-a001-8bf7193fb0dd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.953493 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnrg\" (UniqueName: \"kubernetes.io/projected/2fd142c7-125b-41ad-a645-c1eac4caa96b-kube-api-access-mfnrg\") pod \"console-f9d7485db-jzc5h\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.959485 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.964418 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:08:59 crc kubenswrapper[4771]: E0129 09:08:59.964768 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.464677772 +0000 UTC m=+160.587517999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.966484 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:08:59 crc kubenswrapper[4771]: E0129 09:08:59.966912 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.466892315 +0000 UTC m=+160.589732542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.969108 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rsp6c"] Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.971336 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2"] Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.973506 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.973525 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/796d2283-9389-48d3-9e0a-e71cf4f58ce1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8hkbq\" (UID: \"796d2283-9389-48d3-9e0a-e71cf4f58ce1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:08:59 crc kubenswrapper[4771]: I0129 09:08:59.995042 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.005065 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.005450 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cdgh\" (UniqueName: \"kubernetes.io/projected/bf5607c1-6059-4a83-b10b-28de3d1b872a-kube-api-access-7cdgh\") pod \"migrator-59844c95c7-28dgh\" (UID: \"bf5607c1-6059-4a83-b10b-28de3d1b872a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.017068 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.024721 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.032793 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.038644 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.039861 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nd6\" (UniqueName: \"kubernetes.io/projected/7afa89b1-e90b-4b0c-961e-73f964e2c4f2-kube-api-access-27nd6\") pod \"service-ca-operator-777779d784-2jhkf\" (UID: \"7afa89b1-e90b-4b0c-961e-73f964e2c4f2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.062152 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ld9n"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.063406 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95rq\" (UniqueName: \"kubernetes.io/projected/8cda2f63-799c-4e05-894d-c0fe721cf974-kube-api-access-m95rq\") pod \"collect-profiles-29494620-rg7xz\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.067025 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.067594 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.567552507 +0000 UTC m=+160.690392734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.068059 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.068522 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.568512504 +0000 UTC m=+160.691352731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.085336 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhckb\" (UniqueName: \"kubernetes.io/projected/2a126bb5-11e1-49ee-8270-687373b56481-kube-api-access-vhckb\") pod \"ingress-canary-cpgl8\" (UID: \"2a126bb5-11e1-49ee-8270-687373b56481\") " pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.101317 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c98k6\" (UniqueName: \"kubernetes.io/projected/bafa5883-800f-430b-8ac9-580ad34cc571-kube-api-access-c98k6\") pod \"package-server-manager-789f6589d5-pdvmd\" (UID: \"bafa5883-800f-430b-8ac9-580ad34cc571\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.104715 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.109341 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9qbzj"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.121066 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgsdp\" (UniqueName: \"kubernetes.io/projected/7e132aa6-1055-48c5-9a8a-086f008c7a70-kube-api-access-cgsdp\") pod \"dns-operator-744455d44c-mbqr8\" (UID: \"7e132aa6-1055-48c5-9a8a-086f008c7a70\") " pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.126112 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.151105 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9pcz\" (UniqueName: \"kubernetes.io/projected/3d60aa57-7b9b-43eb-a44d-704c92ce6a57-kube-api-access-g9pcz\") pod \"olm-operator-6b444d44fb-xlvgv\" (UID: \"3d60aa57-7b9b-43eb-a44d-704c92ce6a57\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.163902 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0e399b2-7666-44a6-b886-5d941a985630-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.170391 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.171031 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.671007439 +0000 UTC m=+160.793847666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.192682 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.194237 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdv5j\" (UniqueName: \"kubernetes.io/projected/f92b62e7-3351-4e65-a49d-49b6a6217796-kube-api-access-jdv5j\") pod \"marketplace-operator-79b997595-nqgdw\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.198104 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z92lp\" (UniqueName: \"kubernetes.io/projected/8a93775b-af07-44d2-b075-59838e7a8920-kube-api-access-z92lp\") pod \"packageserver-d55dfcdfc-bgssx\" (UID: \"8a93775b-af07-44d2-b075-59838e7a8920\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.223418 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4dh\" (UniqueName: \"kubernetes.io/projected/7afa746d-e424-4032-9e51-35cf653e50ac-kube-api-access-7n4dh\") pod \"catalog-operator-68c6474976-2dm8r\" (UID: \"7afa746d-e424-4032-9e51-35cf653e50ac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.229292 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.241274 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cpgl8" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.241626 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrn5\" (UniqueName: \"kubernetes.io/projected/ed15f83f-f5b1-4cb8-8414-b143435f9eb4-kube-api-access-wcrn5\") pod \"dns-default-w8f4k\" (UID: \"ed15f83f-f5b1-4cb8-8414-b143435f9eb4\") " pod="openshift-dns/dns-default-w8f4k" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.257440 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp54h\" (UniqueName: \"kubernetes.io/projected/872b5467-e306-44a0-b142-baea0c170180-kube-api-access-dp54h\") pod \"kube-storage-version-migrator-operator-b67b599dd-9gkcl\" (UID: \"872b5467-e306-44a0-b142-baea0c170180\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.287538 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w8f4k" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.288279 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.288669 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 09:09:00.788654822 +0000 UTC m=+160.911495049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.301686 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv757\" (UniqueName: \"kubernetes.io/projected/a40a655e-56fc-4578-8dd9-6ae371433ea0-kube-api-access-tv757\") pod \"downloads-7954f5f757-kxgbp\" (UID: \"a40a655e-56fc-4578-8dd9-6ae371433ea0\") " pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.313268 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k899g\" (UniqueName: \"kubernetes.io/projected/30a2442c-28f9-4390-bd1a-06a7542576a8-kube-api-access-k899g\") pod \"etcd-operator-b45778765-bdcnw\" (UID: \"30a2442c-28f9-4390-bd1a-06a7542576a8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.314573 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.320027 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gljnf\" (UniqueName: \"kubernetes.io/projected/72d00e01-7d77-4404-ab20-dcccc7764b69-kube-api-access-gljnf\") pod \"router-default-5444994796-95rng\" (UID: \"72d00e01-7d77-4404-ab20-dcccc7764b69\") " pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.329751 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jzc5h"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.343282 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmxhk\" (UniqueName: \"kubernetes.io/projected/ab0e857c-3b75-4351-82e1-5faeaf0317be-kube-api-access-qmxhk\") pod \"openshift-controller-manager-operator-756b6f6bc6-clpx4\" (UID: \"ab0e857c-3b75-4351-82e1-5faeaf0317be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.347464 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.366592 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.372892 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.378248 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c85b6c-532f-4691-8e70-4abc2c4f668c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b6l78\" (UID: \"10c85b6c-532f-4691-8e70-4abc2c4f668c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.378901 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l79jh\" (UniqueName: \"kubernetes.io/projected/2fa1d696-380f-4007-99d9-2853d03c5782-kube-api-access-l79jh\") pod \"multus-admission-controller-857f4d67dd-wszgv\" (UID: \"2fa1d696-380f-4007-99d9-2853d03c5782\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.379073 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.393074 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.393125 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nn2z2"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.393974 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.394643 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.395300 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.895257022 +0000 UTC m=+161.018097429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.401383 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss9mb\" (UniqueName: \"kubernetes.io/projected/39ef9676-ef1e-4410-a82a-3045b2986e88-kube-api-access-ss9mb\") pod \"csi-hostpathplugin-h84vs\" (UID: \"39ef9676-ef1e-4410-a82a-3045b2986e88\") " pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.414029 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.420192 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7grhm\" (UniqueName: \"kubernetes.io/projected/34f0263f-c771-4ef0-91be-9d37f9ba6d60-kube-api-access-7grhm\") pod \"control-plane-machine-set-operator-78cbb6b69f-qvr68\" (UID: \"34f0263f-c771-4ef0-91be-9d37f9ba6d60\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.433316 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.442600 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8sj\" (UniqueName: \"kubernetes.io/projected/8bda7c1e-4453-4f83-8b3d-c8118debe332-kube-api-access-mf8sj\") pod \"machine-config-controller-84d6567774-stkl2\" (UID: \"8bda7c1e-4453-4f83-8b3d-c8118debe332\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.445541 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.454971 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.466625 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t8qg\" (UniqueName: \"kubernetes.io/projected/d0e399b2-7666-44a6-b886-5d941a985630-kube-api-access-6t8qg\") pod \"cluster-image-registry-operator-dc59b4c8b-v7ntz\" (UID: \"d0e399b2-7666-44a6-b886-5d941a985630\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.474215 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9vbh"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.475351 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.475950 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbz72\" (UniqueName: \"kubernetes.io/projected/053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40-kube-api-access-fbz72\") pod \"machine-config-server-wzrzm\" (UID: \"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40\") " pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.495914 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.496450 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:00.996417678 +0000 UTC m=+161.119257895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.502396 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.507263 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.515085 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.556784 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.572254 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.596711 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.597178 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.097157652 +0000 UTC m=+161.219997869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.654379 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.702672 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.703427 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.203404783 +0000 UTC m=+161.326245020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.720006 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2"] Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.723602 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" event={"ID":"9e54e5c2-f953-4249-9586-56c08b2e7631","Type":"ContainerStarted","Data":"8b4d2b8ae67f36ab3ff9627f790ba1fc6c49ee2826d55d4c29da980e7f8c670f"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.737320 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" event={"ID":"2370fdf0-204b-4cbe-a001-8bf7193fb0dd","Type":"ContainerStarted","Data":"acb6318052c2ab5bc56d185e9f75dc4d31523a6ed1710a73af6814730b44249c"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.742769 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" event={"ID":"2ef9891b-0ec1-4469-8701-57d0831b4046","Type":"ContainerStarted","Data":"19681a60a3f9c1f15ee8cef0e4c80843565f0aa5b1d0156c47db40469daa0b46"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.744180 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" event={"ID":"bfcb03e2-b241-4df4-8295-33b5ad3eae58","Type":"ContainerStarted","Data":"6436fc3f2f672851614970cb95a5a9c620811f0ccf486e29c0a4884331f16f6a"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.745587 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" event={"ID":"acd89578-60c3-4368-9b2c-59dc899d1a08","Type":"ContainerStarted","Data":"d961cda1916671bf511701694fa657db18b7d473e5765a9a6bb44813798f36ee"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.749285 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jzc5h" event={"ID":"2fd142c7-125b-41ad-a645-c1eac4caa96b","Type":"ContainerStarted","Data":"90fc2f0b2324eb31fad1021beab8e30f77b7046e97d0d4d098b4a81eda928290"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.752658 4771 generic.go:334] "Generic (PLEG): container finished" podID="f6553555-ed45-445a-a12a-c332f7d8ac0e" containerID="e43c33fdd22bf0751d1eab9c6a6579e32d6c2b8ba5cfcd76719c8ba818bbe671" exitCode=0 Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.753167 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" event={"ID":"f6553555-ed45-445a-a12a-c332f7d8ac0e","Type":"ContainerDied","Data":"e43c33fdd22bf0751d1eab9c6a6579e32d6c2b8ba5cfcd76719c8ba818bbe671"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.767466 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wzrzm" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.778842 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" event={"ID":"30d901bc-be28-4ddc-b46f-05fffb35ec40","Type":"ContainerStarted","Data":"070c9b8208cc654be6c8f5a072b8a20e0c26a3810879608c664fb12dfd6ef4e0"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.782190 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" event={"ID":"55f07f08-7620-440d-81b8-39fdb35d84a3","Type":"ContainerStarted","Data":"ec1df1c73766744dc77c30ce776491db53f08e823c7aba9750d343bb5ff6cffb"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.782266 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" event={"ID":"55f07f08-7620-440d-81b8-39fdb35d84a3","Type":"ContainerStarted","Data":"2720d6c1a629a441023a2b3e69724bf5ce11ffe462018963b6e804c1152935b6"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.783077 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.788410 4771 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-f9qrd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.788462 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" podUID="55f07f08-7620-440d-81b8-39fdb35d84a3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.801297 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" event={"ID":"83770b88-0e8f-4356-b13e-a4deeb9c8b2a","Type":"ContainerStarted","Data":"0dfe81e18deff68270040d8d81ddfdaff70466c2c10114b00dd821c67dfe6ac3"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.807589 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.808233 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.30812154 +0000 UTC m=+161.430961777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.808675 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.809946 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.309815328 +0000 UTC m=+161.432655555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.826856 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" event={"ID":"fa5d10fe-44e0-4910-aa58-693cd50e8ab4","Type":"ContainerStarted","Data":"010063fffd06bb6174b8caf875f02d75980ce44b8e7fb7afb24fd5005d52e2ab"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.838143 4771 generic.go:334] "Generic (PLEG): container finished" podID="47cda8f8-ad10-4898-a066-1c388df82ab4" containerID="070fabc3721859547a9cc09ea8fb70ffaae0cf1bb7368ce02d6b9d3e1739ce0a" exitCode=0 Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.861321 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" event={"ID":"47cda8f8-ad10-4898-a066-1c388df82ab4","Type":"ContainerDied","Data":"070fabc3721859547a9cc09ea8fb70ffaae0cf1bb7368ce02d6b9d3e1739ce0a"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.861365 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" event={"ID":"47cda8f8-ad10-4898-a066-1c388df82ab4","Type":"ContainerStarted","Data":"eb6a6d855a53533f95e76746ab9ac1146804951752e938cfab4d017aa35ea0e0"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.869620 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" event={"ID":"87128957-2efe-44be-bfa2-a5dc8a251453","Type":"ContainerStarted","Data":"18c9b6652a43bd364ae895a086b2f2775fb67e1890231478ab6ad67ca2e42290"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.888582 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" 
event={"ID":"13594924-bb90-4488-84c7-2046b323e219","Type":"ContainerStarted","Data":"29849c0fbba52d5f4d7922cb2266c0cadb92d7ab2d96d1c83accf1df34028034"} Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.911184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.911616 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.411589932 +0000 UTC m=+161.534430159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:00 crc kubenswrapper[4771]: I0129 09:09:00.911771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:00 crc kubenswrapper[4771]: E0129 09:09:00.914203 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.414193716 +0000 UTC m=+161.537033943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.014497 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.014813 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.514794756 +0000 UTC m=+161.637634983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.014883 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.016380 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.516372131 +0000 UTC m=+161.639212358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.117128 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.117367 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.617324371 +0000 UTC m=+161.740164598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.117573 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.118275 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.618258488 +0000 UTC m=+161.741098715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.218683 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.218960 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.71891738 +0000 UTC m=+161.841757607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.219444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.219976 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.71996471 +0000 UTC m=+161.842805107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.273089 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh"] Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.290194 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq"] Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.321251 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.321646 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.82160688 +0000 UTC m=+161.944447117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.325000 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.325742 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.825723786 +0000 UTC m=+161.948564013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.434118 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.434846 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:01.934827898 +0000 UTC m=+162.057668125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.536779 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.537762 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.037740054 +0000 UTC m=+162.160580281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.638856 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.639117 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.139100346 +0000 UTC m=+162.261940573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.639168 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.639558 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.139550468 +0000 UTC m=+162.262390695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.741322 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.741521 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.241489777 +0000 UTC m=+162.364330004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.742268 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.742648 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.242630759 +0000 UTC m=+162.365471176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.844000 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.845036 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.34501725 +0000 UTC m=+162.467857477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.853053 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" podStartSLOduration=133.853020667 podStartE2EDuration="2m13.853020667s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:01.852210584 +0000 UTC m=+161.975050831" watchObservedRunningTime="2026-01-29 09:09:01.853020667 +0000 UTC m=+161.975860894" Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.900197 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" podStartSLOduration=132.900173353 podStartE2EDuration="2m12.900173353s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:01.898893157 +0000 UTC m=+162.021733394" watchObservedRunningTime="2026-01-29 09:09:01.900173353 +0000 UTC m=+162.023013580" Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.946659 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:01 crc kubenswrapper[4771]: E0129 09:09:01.947124 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.447107873 +0000 UTC m=+162.569948100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.953929 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" event={"ID":"fa5d10fe-44e0-4910-aa58-693cd50e8ab4","Type":"ContainerStarted","Data":"11283132b43d02e1e9cc49bece549390e91d142d7871c4490b07c0fd99657582"} Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.983164 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" event={"ID":"f6553555-ed45-445a-a12a-c332f7d8ac0e","Type":"ContainerStarted","Data":"f6d127ff49f823ee7733aa458bf6d957638d43a3d9dea2efc472e423762d9b51"} Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.983925 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:09:01 crc kubenswrapper[4771]: I0129 09:09:01.984419 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" event={"ID":"796d2283-9389-48d3-9e0a-e71cf4f58ce1","Type":"ContainerStarted","Data":"bf76cd7c06de9b8c4fd8f7c40459281c1a8a24e5ecbf1a7c98bdaaeeb7dd6198"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.018968 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5nllz" podStartSLOduration=134.018934008 podStartE2EDuration="2m14.018934008s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:02.009843501 +0000 UTC m=+162.132683728" watchObservedRunningTime="2026-01-29 09:09:02.018934008 +0000 UTC m=+162.141774235" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.047964 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cpgl8"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.048070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92tqz" event={"ID":"87128957-2efe-44be-bfa2-a5dc8a251453","Type":"ContainerStarted","Data":"4a5bc259f1b721e66ce7b1951782fb3e0f8ba2adbf0208db789494ad9c7bdce3"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.048577 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.048810 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 09:09:02.548784624 +0000 UTC m=+162.671624851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.049073 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.051091 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.551071319 +0000 UTC m=+162.673911556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.083068 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" event={"ID":"9e54e5c2-f953-4249-9586-56c08b2e7631","Type":"ContainerStarted","Data":"83600c14d9a875a7b7cfa6f6e94791237aab8dc258206b7622bef2b42b36fe2d"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.097012 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" event={"ID":"7cc18dde-121c-4f72-a2a1-25c7a5559a5a","Type":"ContainerStarted","Data":"dfe03c9c97ed88aa08019b085cc9fbcde3cb5d0b57a4c4bd15fed7d9a7290cb5"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.107936 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jzc5h" event={"ID":"2fd142c7-125b-41ad-a645-c1eac4caa96b","Type":"ContainerStarted","Data":"2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.113149 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" event={"ID":"13594924-bb90-4488-84c7-2046b323e219","Type":"ContainerStarted","Data":"ce309585cf348b37de36222638556dc5762cd6fd92c5cb3bbc5b3eef47d21cbe"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.113707 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" podStartSLOduration=134.113659082 podStartE2EDuration="2m14.113659082s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:02.094506449 +0000 UTC m=+162.217346676" watchObservedRunningTime="2026-01-29 09:09:02.113659082 +0000 UTC m=+162.236499309" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.115020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wzrzm" event={"ID":"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40","Type":"ContainerStarted","Data":"4a5f5aa6f0862637a92a3f0ee1a92b960224b0dd24ed5a2ed305c69d8d3db10f"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.116606 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-95rng" event={"ID":"72d00e01-7d77-4404-ab20-dcccc7764b69","Type":"ContainerStarted","Data":"9e3b7867604e0228175e8292054540f08f7721dd168c4db6e084b145d706dffb"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.117662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" event={"ID":"bf5607c1-6059-4a83-b10b-28de3d1b872a","Type":"ContainerStarted","Data":"e28244decde248d66089f9da68ba4c8418fb7a8902dc7ce1add77dcc8dca0933"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.124980 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-c9vbh" podStartSLOduration=133.124949912 podStartE2EDuration="2m13.124949912s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:02.120204278 +0000 UTC m=+162.243044505" watchObservedRunningTime="2026-01-29 09:09:02.124949912 +0000 UTC m=+162.247790139" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.126046 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" event={"ID":"e6a19255-f81c-4f70-817b-0450703e4962","Type":"ContainerStarted","Data":"0a133e124bb3af9f166e3476698e59cac0370794752e6306f53f82c469bb1fbf"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.136860 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" event={"ID":"acd89578-60c3-4368-9b2c-59dc899d1a08","Type":"ContainerStarted","Data":"7c487b4404d09c599919395407ecbe9906fa34ec0eea0f7bd16313730253a6de"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.145904 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5xjmf"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.154802 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.155537 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.655489537 +0000 UTC m=+162.778329934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.163499 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.663469973 +0000 UTC m=+162.786310200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.163630 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.239673 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" event={"ID":"83770b88-0e8f-4356-b13e-a4deeb9c8b2a","Type":"ContainerStarted","Data":"087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.241291 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.270497 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.274207 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.77416431 +0000 UTC m=+162.897004537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.281173 4771 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5ld9n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.281315 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" podUID="83770b88-0e8f-4356-b13e-a4deeb9c8b2a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.319682 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qbzj" podStartSLOduration=134.319642468 podStartE2EDuration="2m14.319642468s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:02.314622806 +0000 UTC m=+162.437463033" watchObservedRunningTime="2026-01-29 09:09:02.319642468 +0000 UTC m=+162.442482695" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.329413 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" event={"ID":"2ef9891b-0ec1-4469-8701-57d0831b4046","Type":"ContainerStarted","Data":"262f4be663a31c0bf23037c99990a324dcacf3e769bbddcc4eec37e905dc1a36"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.347550 4771 generic.go:334] "Generic (PLEG): container finished" podID="bfcb03e2-b241-4df4-8295-33b5ad3eae58" containerID="c73167452c8541a4deb4efc828c997d9697f9efaa87f794efa68e93a899d8dc7" exitCode=0 Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.349662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" event={"ID":"bfcb03e2-b241-4df4-8295-33b5ad3eae58","Type":"ContainerDied","Data":"c73167452c8541a4deb4efc828c997d9697f9efaa87f794efa68e93a899d8dc7"} Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.357148 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.357731 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jzc5h" podStartSLOduration=134.357679566 podStartE2EDuration="2m14.357679566s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:02.350619436 +0000 UTC m=+162.473459663" watchObservedRunningTime="2026-01-29 09:09:02.357679566 +0000 UTC m=+162.480519793" Jan 29 09:09:02 crc kubenswrapper[4771]: 
I0129 09:09:02.368552 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.372574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.380858 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.880828572 +0000 UTC m=+163.003668809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.381105 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.404790 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.467917 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.467964 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.467976 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bdcnw"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.479254 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.480246 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:02.980196248 +0000 UTC m=+163.103036625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.488172 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w8f4k"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.496403 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kxgbp"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.496467 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mbqr8"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.498102 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv"] Jan 29 09:09:02 crc kubenswrapper[4771]: W0129 09:09:02.501409 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbafa5883_800f_430b_8ac9_580ad34cc571.slice/crio-12a1aea32870adeced829aecb95cd81d5ab12f30b8660e4504d01c39b6743dd1 WatchSource:0}: Error finding container 12a1aea32870adeced829aecb95cd81d5ab12f30b8660e4504d01c39b6743dd1: Status 404 returned error can't find the container with id 12a1aea32870adeced829aecb95cd81d5ab12f30b8660e4504d01c39b6743dd1 Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.573429 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" podStartSLOduration=134.573402749 podStartE2EDuration="2m14.573402749s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:02.572064821 +0000 UTC m=+162.694905048" watchObservedRunningTime="2026-01-29 09:09:02.573402749 +0000 UTC m=+162.696242976" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.583923 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.587083 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.592250 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.092224212 +0000 UTC m=+163.215064439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: W0129 09:09:02.594965 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a93775b_af07_44d2_b075_59838e7a8920.slice/crio-2c7e9962c292f291db63d602de0326028d21c2323bf743ed60df59cb30a44ecd WatchSource:0}: Error finding container 2c7e9962c292f291db63d602de0326028d21c2323bf743ed60df59cb30a44ecd: Status 404 returned error can't find the container with id 2c7e9962c292f291db63d602de0326028d21c2323bf743ed60df59cb30a44ecd Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.599750 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqgdw"] Jan 29 09:09:02 crc kubenswrapper[4771]: W0129 09:09:02.614817 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d60aa57_7b9b_43eb_a44d_704c92ce6a57.slice/crio-e9f2ae5d06045a320093ee194c8df105f0be90016b4560436e8236a552dca074 WatchSource:0}: Error finding container e9f2ae5d06045a320093ee194c8df105f0be90016b4560436e8236a552dca074: Status 404 returned error can't find the container with id e9f2ae5d06045a320093ee194c8df105f0be90016b4560436e8236a552dca074 Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.616989 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.628263 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wp8pt" podStartSLOduration=134.628233762 podStartE2EDuration="2m14.628233762s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:02.624019333 +0000 UTC m=+162.746859570" watchObservedRunningTime="2026-01-29 09:09:02.628233762 +0000 UTC m=+162.751073989" Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.643142 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.649902 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.667855 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.676457 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wszgv"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.691988 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.692993 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.192964517 +0000 UTC m=+163.315804744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.718861 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl"] Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.726238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h84vs"] Jan 29 09:09:02 crc kubenswrapper[4771]: W0129 09:09:02.743257 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10c85b6c_532f_4691_8e70_4abc2c4f668c.slice/crio-10e92be868c0e2d92807ba8088ad3b8c222009add6366f09c48b0cab8030f499 WatchSource:0}: Error finding container 10e92be868c0e2d92807ba8088ad3b8c222009add6366f09c48b0cab8030f499: Status 404 returned error can't find the container with id 10e92be868c0e2d92807ba8088ad3b8c222009add6366f09c48b0cab8030f499 Jan 29 09:09:02 crc kubenswrapper[4771]: W0129 09:09:02.785356 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e399b2_7666_44a6_b886_5d941a985630.slice/crio-eb06e924531fb514022b8aec82bd1d8c0cc5070163af07ee6dceba474735f173 WatchSource:0}: Error finding container eb06e924531fb514022b8aec82bd1d8c0cc5070163af07ee6dceba474735f173: Status 404 returned error can't find the container with id eb06e924531fb514022b8aec82bd1d8c0cc5070163af07ee6dceba474735f173 Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.802496 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.803103 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.303086977 +0000 UTC m=+163.425927204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: W0129 09:09:02.823589 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34f0263f_c771_4ef0_91be_9d37f9ba6d60.slice/crio-fa15b8a353b0dd80737aaae20473e18e49869302be9e150774870a7c947b4478 WatchSource:0}: Error finding container fa15b8a353b0dd80737aaae20473e18e49869302be9e150774870a7c947b4478: Status 404 returned error can't find the container with id fa15b8a353b0dd80737aaae20473e18e49869302be9e150774870a7c947b4478 Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.905840 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:02 crc kubenswrapper[4771]: E0129 09:09:02.906637 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.406619141 +0000 UTC m=+163.529459368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.927089 4771 csr.go:261] certificate signing request csr-82q5v is approved, waiting to be issued Jan 29 09:09:02 crc kubenswrapper[4771]: I0129 09:09:02.927147 4771 csr.go:257] certificate signing request csr-82q5v is issued Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.015951 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.016382 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.51635496 +0000 UTC m=+163.639195187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.117552 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.118050 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.618023321 +0000 UTC m=+163.740863548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.219298 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.219928 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.719904497 +0000 UTC m=+163.842744724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.320371 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.321314 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.821260409 +0000 UTC m=+163.944100636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.392528 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" event={"ID":"39ef9676-ef1e-4410-a82a-3045b2986e88","Type":"ContainerStarted","Data":"7b58107a1cefe468ff21ff1b932f2163f11dfffbc8d83edc147c1112af2aa8a6"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.406829 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxgbp" event={"ID":"a40a655e-56fc-4578-8dd9-6ae371433ea0","Type":"ContainerStarted","Data":"12bc9f332348f5ef0eaba7c7b7ca51d1e15d39a11abbd885560d289e2d95c12c"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.419829 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" event={"ID":"796d2283-9389-48d3-9e0a-e71cf4f58ce1","Type":"ContainerStarted","Data":"1ba673f17aad70161549d6b56a8707ecfc606648e9b2b0e0c9a9d2ae2f776b71"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.419919 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" event={"ID":"796d2283-9389-48d3-9e0a-e71cf4f58ce1","Type":"ContainerStarted","Data":"91d1340fbbc0d410edcb3ebe3048865c2894245d6adbd5173b7e3e42d8c31fdd"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.424592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.425139 4771 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:03.925121122 +0000 UTC m=+164.047961349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.464863 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" event={"ID":"30a2442c-28f9-4390-bd1a-06a7542576a8","Type":"ContainerStarted","Data":"acde7bbec4227977203a9ac6180e7d9228929d50427d5db812fc16127f64420b"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.465026 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w8f4k" event={"ID":"ed15f83f-f5b1-4cb8-8414-b143435f9eb4","Type":"ContainerStarted","Data":"c05575cf4ea5fd4cb92e318b0b0440bfc8d8780675a44dc72f28e236bf4c54a1"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.467302 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" event={"ID":"30d901bc-be28-4ddc-b46f-05fffb35ec40","Type":"ContainerStarted","Data":"69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.469480 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.476127 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" event={"ID":"f92b62e7-3351-4e65-a49d-49b6a6217796","Type":"ContainerStarted","Data":"2fbc87d3afbeef33ef1a63cab07da35c49d9d74daab62b6e0d5dbbde020375f5"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.489201 4771 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nn2z2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.489282 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" podUID="30d901bc-be28-4ddc-b46f-05fffb35ec40" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.498027 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8hkbq" podStartSLOduration=135.497969086 podStartE2EDuration="2m15.497969086s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:03.4663425 +0000 UTC m=+163.589182727" watchObservedRunningTime="2026-01-29 09:09:03.497969086 
+0000 UTC m=+163.620809313" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.517832 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" podStartSLOduration=135.517808699 podStartE2EDuration="2m15.517808699s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:03.516597864 +0000 UTC m=+163.639438101" watchObservedRunningTime="2026-01-29 09:09:03.517808699 +0000 UTC m=+163.640648926" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.527768 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.529811 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.029783788 +0000 UTC m=+164.152624025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.547310 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" event={"ID":"bafa5883-800f-430b-8ac9-580ad34cc571","Type":"ContainerStarted","Data":"12a1aea32870adeced829aecb95cd81d5ab12f30b8660e4504d01c39b6743dd1"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.561866 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cpgl8" event={"ID":"2a126bb5-11e1-49ee-8270-687373b56481","Type":"ContainerStarted","Data":"afb6e4639ea3954c680a77f37c4685e0d29a0c8daf60310ded620b598c6eb049"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.561937 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cpgl8" event={"ID":"2a126bb5-11e1-49ee-8270-687373b56481","Type":"ContainerStarted","Data":"41af644f2820296a4318b5ecc21a3267729826ef5da90caebd76d2f3d8434837"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.606080 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" event={"ID":"10c85b6c-532f-4691-8e70-4abc2c4f668c","Type":"ContainerStarted","Data":"10e92be868c0e2d92807ba8088ad3b8c222009add6366f09c48b0cab8030f499"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.625667 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cpgl8" podStartSLOduration=6.625633844 podStartE2EDuration="6.625633844s" podCreationTimestamp="2026-01-29 09:08:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:03.624978265 +0000 UTC m=+163.747818502" watchObservedRunningTime="2026-01-29 09:09:03.625633844 +0000 UTC m=+163.748474071" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.641880 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.643139 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.143111149 +0000 UTC m=+164.265951376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.652278 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" event={"ID":"8bda7c1e-4453-4f83-8b3d-c8118debe332","Type":"ContainerStarted","Data":"32077c7ebc31dea1b895958ff22a232bc6ebced492a39e6470567545d548fadd"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.682311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" event={"ID":"7afa746d-e424-4032-9e51-35cf653e50ac","Type":"ContainerStarted","Data":"7d4183f8d474e1f3c7b5e9f589dcf4d92660977b7b52d9809cfcafbec77bc027"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.685173 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.707294 4771 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2dm8r container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.707379 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" podUID="7afa746d-e424-4032-9e51-35cf653e50ac" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.723969 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" 
event={"ID":"e6a19255-f81c-4f70-817b-0450703e4962","Type":"ContainerStarted","Data":"fce7a54a78a07a0225832691e7898adea74604603bb8ad11e926d9841aa025e3"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.724043 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" event={"ID":"e6a19255-f81c-4f70-817b-0450703e4962","Type":"ContainerStarted","Data":"1a2df9ab9dc3875de711037e2f4e9322df1e47276c53a84802ee26955c3d8fd2"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.745502 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.747777 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.247747064 +0000 UTC m=+164.370587301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.766066 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" event={"ID":"872b5467-e306-44a0-b142-baea0c170180","Type":"ContainerStarted","Data":"a68abb0b659419d1b0e9816f255c66f019965d042925debebbdd7a6289461557"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.775488 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7r6g2" podStartSLOduration=134.775465009 podStartE2EDuration="2m14.775465009s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:03.774373288 +0000 UTC m=+163.897213515" watchObservedRunningTime="2026-01-29 09:09:03.775465009 +0000 UTC m=+163.898305236" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.775688 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" podStartSLOduration=134.775682005 podStartE2EDuration="2m14.775682005s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:03.724486425 +0000 UTC m=+163.847326672" watchObservedRunningTime="2026-01-29 09:09:03.775682005 +0000 UTC m=+163.898522232" Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.795092 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" 
event={"ID":"8a93775b-af07-44d2-b075-59838e7a8920","Type":"ContainerStarted","Data":"2c7e9962c292f291db63d602de0326028d21c2323bf743ed60df59cb30a44ecd"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.808334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" event={"ID":"d0e399b2-7666-44a6-b886-5d941a985630","Type":"ContainerStarted","Data":"eb06e924531fb514022b8aec82bd1d8c0cc5070163af07ee6dceba474735f173"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.848886 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.849472 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.349452986 +0000 UTC m=+164.472293223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.852021 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" event={"ID":"3d60aa57-7b9b-43eb-a44d-704c92ce6a57","Type":"ContainerStarted","Data":"e9f2ae5d06045a320093ee194c8df105f0be90016b4560436e8236a552dca074"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.872295 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" event={"ID":"7cc18dde-121c-4f72-a2a1-25c7a5559a5a","Type":"ContainerStarted","Data":"1d1ec02b48104c04e8ccfda1e3c29cadc84d89bb2d1a979ecce995292e6461e2"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.872370 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" event={"ID":"7cc18dde-121c-4f72-a2a1-25c7a5559a5a","Type":"ContainerStarted","Data":"488aebb6798aeae1651e21d2f324b18ad85792c0b74865ee3824fee26526a12c"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.877167 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" event={"ID":"7afa89b1-e90b-4b0c-961e-73f964e2c4f2","Type":"ContainerStarted","Data":"49e726d9ee329c6dbebfc82534f5501eaf8189f9ae85cf2f76c11cad8c71a221"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.877244 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" event={"ID":"7afa89b1-e90b-4b0c-961e-73f964e2c4f2","Type":"ContainerStarted","Data":"959d574fe7515d6bf4025be2a3093b21e20fb597708e001a3c0e334237dc36cb"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 
09:09:03.935854 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 09:04:02 +0000 UTC, rotation deadline is 2026-11-06 10:49:14.980943 +0000 UTC Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.936407 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6745h40m11.044540221s for next certificate rotation Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.945099 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" event={"ID":"2fa1d696-380f-4007-99d9-2853d03c5782","Type":"ContainerStarted","Data":"33526bcda70825d6d454651aaa6a8b3261d00e70c5c579e47c747e3b317233b3"} Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.955866 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:03 crc kubenswrapper[4771]: E0129 09:09:03.957640 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.457616259 +0000 UTC m=+164.580456486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:03 crc kubenswrapper[4771]: I0129 09:09:03.975482 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kbglz" podStartSLOduration=135.975452115 podStartE2EDuration="2m15.975452115s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:03.959273686 +0000 UTC m=+164.082113913" watchObservedRunningTime="2026-01-29 09:09:03.975452115 +0000 UTC m=+164.098292352" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.020387 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-95rng" event={"ID":"72d00e01-7d77-4404-ab20-dcccc7764b69","Type":"ContainerStarted","Data":"d1d199eef2894bbd8167fb392ffe9f6ec488f52fc1ecd67c8925fc17535b7c92"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.053892 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wzrzm" event={"ID":"053f4fd4-3e0c-4d6f-ae98-c4ffcf92cc40","Type":"ContainerStarted","Data":"d61f9a5825d2478b03d883a3e3f26110aeb06378790146212ccf1ec7eb07b1b3"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.058073 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.081050 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.581009806 +0000 UTC m=+164.703850033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.089674 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" event={"ID":"dd247d04-6c97-4819-a31d-f1f7eb95d40a","Type":"ContainerStarted","Data":"9290348bb41224dc65dd2df572088bae8d8da764ba8c58bb3cf0ee3fbdef8f0a"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.089760 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" event={"ID":"dd247d04-6c97-4819-a31d-f1f7eb95d40a","Type":"ContainerStarted","Data":"b62244dcaf64bc7ab269c2f45e7966bce5b813e506aaa97c359bbea8d158a3d8"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.091842 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.117663 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" event={"ID":"acd89578-60c3-4368-9b2c-59dc899d1a08","Type":"ContainerStarted","Data":"18847ee89dd49216d9439816e953343d07d614da34b9b89a3450c45cbb7f19b9"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.141221 4771 patch_prober.go:28] interesting pod/console-operator-58897d9998-5xjmf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.141533 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" podUID="dd247d04-6c97-4819-a31d-f1f7eb95d40a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.162194 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.163574 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.663556285 +0000 UTC m=+164.786396512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.217871 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2jhkf" podStartSLOduration=135.217848223 podStartE2EDuration="2m15.217848223s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.087431028 +0000 UTC m=+164.210271265" watchObservedRunningTime="2026-01-29 09:09:04.217848223 +0000 UTC m=+164.340688450" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.226557 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-95rng" podStartSLOduration=136.226529169 podStartE2EDuration="2m16.226529169s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.216139595 +0000 UTC m=+164.338979822" watchObservedRunningTime="2026-01-29 09:09:04.226529169 +0000 UTC m=+164.349369396" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.233347 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" event={"ID":"47cda8f8-ad10-4898-a066-1c388df82ab4","Type":"ContainerStarted","Data":"357271b4f029b93b2ac42d0ea47873ec34570ef7c9842065fccc616a9fc099c2"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.267992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.279994 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.779977523 +0000 UTC m=+164.902817750 (durationBeforeRetry 500ms). 
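
(Annotation: the repeating MountVolume.MountDevice / UnmountVolume.TearDown failures above all share one cause: the kubelet has not yet seen a registration socket for kubevirt.io.hostpath-provisioner. The kubelet discovers CSI node plugins by watching registration sockets under /var/lib/kubelet/plugins_registry/; until the provisioner's node plugin comes up and registers itself there, every mount and unmount path for its volumes fails with "not found in the list of registered CSI drivers". Below is a minimal sketch of that discovery step, assuming the stock kubelet layout and the common node-driver-registrar convention of naming sockets <driver-name>-reg.sock; it is an illustration, not the kubelet's actual plugin watcher.)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// pluginRegistry is where the kubelet's plugin watcher looks for CSI
// node-plugin registration sockets (stock kubelet layout).
const pluginRegistry = "/var/lib/kubelet/plugins_registry"

func main() {
	entries, err := os.ReadDir(pluginRegistry)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", pluginRegistry, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// node-driver-registrar conventionally creates <driver-name>-reg.sock,
		// e.g. kubevirt.io.hostpath-provisioner-reg.sock (naming is a
		// convention, not guaranteed).
		if strings.HasSuffix(e.Name(), ".sock") {
			fmt.Println(filepath.Join(pluginRegistry, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no registration sockets yet; CSI node plugins still starting")
	}
}
```

(Once the hostpath-provisioner node plugin registers, the same operations should succeed on a subsequent retry.)
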
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.281801 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wzrzm" podStartSLOduration=7.281779295 podStartE2EDuration="7.281779295s" podCreationTimestamp="2026-01-29 09:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.280531129 +0000 UTC m=+164.403371356" watchObservedRunningTime="2026-01-29 09:09:04.281779295 +0000 UTC m=+164.404619522" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.313429 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" event={"ID":"bf5607c1-6059-4a83-b10b-28de3d1b872a","Type":"ContainerStarted","Data":"00d7fc6b3f0310312e3bd3a8c20a34c58df6ad48af929212ddc4d96a63c6b42b"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.313877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" event={"ID":"bf5607c1-6059-4a83-b10b-28de3d1b872a","Type":"ContainerStarted","Data":"5706d847e4880f006c456384a524e6ea9e1c11221bf9cee0237301354b400a30"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.330387 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" event={"ID":"ab0e857c-3b75-4351-82e1-5faeaf0317be","Type":"ContainerStarted","Data":"8d0ebca94c873a2cf81dcfe0f24b2fced19d92d27fd856b8d1ea42b76a2f7d83"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.360797 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" event={"ID":"8cda2f63-799c-4e05-894d-c0fe721cf974","Type":"ContainerStarted","Data":"244f7ca7877b0d49e9d04296616d20be9c2065883eff1967f69a2f5110d17b9c"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.360851 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" event={"ID":"8cda2f63-799c-4e05-894d-c0fe721cf974","Type":"ContainerStarted","Data":"951a93f11bbf0c657911bc5dd8c06ae8fd4b14af34a3a5dcc9694cdc3cc67da3"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.374380 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.375600 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" event={"ID":"7e132aa6-1055-48c5-9a8a-086f008c7a70","Type":"ContainerStarted","Data":"bbed4282c2f606f2628ae1a7b61d301623485c65273c397de0ec1e33810e1a22"} Jan 29 09:09:04 crc 
kubenswrapper[4771]: E0129 09:09:04.376134 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.876104757 +0000 UTC m=+164.998944984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.379790 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" event={"ID":"2370fdf0-204b-4cbe-a001-8bf7193fb0dd","Type":"ContainerStarted","Data":"32b78548b3683765e031d471a28d13ec350f976a9c64c10e8052d69cc45101a5"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.390081 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" event={"ID":"34f0263f-c771-4ef0-91be-9d37f9ba6d60","Type":"ContainerStarted","Data":"fa15b8a353b0dd80737aaae20473e18e49869302be9e150774870a7c947b4478"} Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.424541 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.462018 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.462856 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.494398 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.497222 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:04.997203629 +0000 UTC m=+165.120043856 (durationBeforeRetry 500ms). 
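
(Annotation: each failure line also records the retry bookkeeping: nestedpendingoperations refuses to re-queue an operation for a given volume until errorTime plus durationBeforeRetry, and the m=+N suffix is Go's monotonic-clock reading in seconds since the kubelet process started, which is why 09:09:04.876 pairs with m=+164.998 throughout this section. The sketch below models that gate with the fixed 500ms initial backoff shown in these lines; it is not kubelet's actual nestedpendingoperations type, whose backoff can grow on repeated failures.)

```go
package main

import (
	"fmt"
	"time"
)

// retryGate mirrors the "No retries permitted until ..." lines: after a
// failure, the operation may not be re-queued before lastError plus
// durationBeforeRetry. Illustrative sketch only.
type retryGate struct {
	lastError           time.Time
	durationBeforeRetry time.Duration
}

func (g *retryGate) recordError(now time.Time) { g.lastError = now }

func (g *retryGate) mayRetry(now time.Time) bool {
	return !now.Before(g.lastError.Add(g.durationBeforeRetry))
}

func main() {
	g := &retryGate{durationBeforeRetry: 500 * time.Millisecond}

	// Failure stamped at the E0129 09:09:04.376134 line above.
	errAt := time.Date(2026, 1, 29, 9, 9, 4, 376134000, time.UTC)
	g.recordError(errAt)

	fmt.Println("no retries permitted until", g.lastError.Add(g.durationBeforeRetry))
	fmt.Println("retry 400ms later allowed?", g.mayRetry(errAt.Add(400*time.Millisecond)))
	fmt.Println("retry 600ms later allowed?", g.mayRetry(errAt.Add(600*time.Millisecond)))
}
```

(The deadline printed here can differ from the logged one by a few tens of microseconds, since the executor stamps the deadline from its own clock read rather than from the log timestamp.)
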
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.509965 4771 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ztsgd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.510014 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" podUID="47cda8f8-ad10-4898-a066-1c388df82ab4" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.513834 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.544003 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:04 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:04 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:04 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.544087 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.546840 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" podStartSLOduration=136.546799494 podStartE2EDuration="2m16.546799494s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.54453461 +0000 UTC m=+164.667374847" watchObservedRunningTime="2026-01-29 09:09:04.546799494 +0000 UTC m=+164.669639731" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.553832 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-rsp6c" podStartSLOduration=135.553797332 podStartE2EDuration="2m15.553797332s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.436167419 +0000 UTC m=+164.559007646" watchObservedRunningTime="2026-01-29 09:09:04.553797332 +0000 UTC m=+164.676637569" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.598844 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.602051 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.102017678 +0000 UTC m=+165.224858075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.649274 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" podStartSLOduration=136.649220896 podStartE2EDuration="2m16.649220896s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.62007885 +0000 UTC m=+164.742919087" watchObservedRunningTime="2026-01-29 09:09:04.649220896 +0000 UTC m=+164.772061123" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.703967 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.705031 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.205008957 +0000 UTC m=+165.327849184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.812749 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.813246 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" podStartSLOduration=136.813216693 podStartE2EDuration="2m16.813216693s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.812323437 +0000 UTC m=+164.935163674" watchObservedRunningTime="2026-01-29 09:09:04.813216693 +0000 UTC m=+164.936056920" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.813922 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xvsw" podStartSLOduration=135.813913422 podStartE2EDuration="2m15.813913422s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.744017102 +0000 UTC m=+164.866857339" watchObservedRunningTime="2026-01-29 09:09:04.813913422 +0000 UTC m=+164.936753649" Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.813304 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.313279774 +0000 UTC m=+165.436120001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.918231 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:04 crc kubenswrapper[4771]: I0129 09:09:04.918847 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-28dgh" podStartSLOduration=135.918833915 podStartE2EDuration="2m15.918833915s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:04.903534352 +0000 UTC m=+165.026374579" watchObservedRunningTime="2026-01-29 09:09:04.918833915 +0000 UTC m=+165.041674142" Jan 29 09:09:04 crc kubenswrapper[4771]: E0129 09:09:04.919437 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.419409752 +0000 UTC m=+165.542250179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.023442 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.023827 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.52381129 +0000 UTC m=+165.646651517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.125518 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.126332 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.626318824 +0000 UTC m=+165.749159051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.228805 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.229498 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.729478007 +0000 UTC m=+165.852318234 (durationBeforeRetry 500ms). 
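
(Annotation: the probe failures interleaved through this section are the normal startup pattern rather than additional faults: PLEG reports ContainerStarted, the first readiness or startup probe fires before the server binds its port, and the prober logs "connect: connection refused", or a 500 from a health endpoint whose checks have not yet synced, as with router-default's [-]backend-http / [-]has-synced output. The sketch below shows the HTTP probe semantics, assuming kubelet's rule that 2xx/3xx counts as success and anything else, including a transport error, is a failure; the URL is a hypothetical local stand-in for endpoints like https://10.217.0.21:6443/healthz.)

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce mimics an HTTP readiness/startup probe: a transport error
// (e.g. "connect: connection refused") fails the probe, and only
// 2xx/3xx status codes succeed, so the router's 500 above is logged as
// "HTTP probe failed with statuscode: 500".
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Hypothetical stand-in for the probed pod endpoints.
	const url = "http://127.0.0.1:8080/healthz"
	for attempt := 1; attempt <= 3; attempt++ {
		if err := probeOnce(url, 1*time.Second); err != nil {
			fmt.Printf("Probe failed (attempt %d): %v\n", attempt, err)
			time.Sleep(10 * time.Second) // typical probe periodSeconds
			continue
		}
		fmt.Println("ready")
		return
	}
}
```
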
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.331412 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.333040 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.83297199 +0000 UTC m=+165.955812207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.395459 4771 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qdxtj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.396235 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" podUID="f6553555-ed45-445a-a12a-c332f7d8ac0e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.433524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" event={"ID":"3d60aa57-7b9b-43eb-a44d-704c92ce6a57","Type":"ContainerStarted","Data":"a5614789d8d98658010a5261186aaedb9c290428ee763e6651b27a216bcaace0"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.435540 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.454772 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 
09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.473123 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:05.962465959 +0000 UTC m=+166.085306186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.478163 4771 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xlvgv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.478262 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" podUID="3d60aa57-7b9b-43eb-a44d-704c92ce6a57" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.491367 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" event={"ID":"bafa5883-800f-430b-8ac9-580ad34cc571","Type":"ContainerStarted","Data":"8969ff99d7fe65051bbe87029cb64bbbe0b159e8b0cb7bb7936720702f18895d"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.494848 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" podStartSLOduration=136.494830856 podStartE2EDuration="2m16.494830856s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:05.49180724 +0000 UTC m=+165.614647487" watchObservedRunningTime="2026-01-29 09:09:05.494830856 +0000 UTC m=+165.617671083" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.518665 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:05 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:05 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:05 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.518806 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.533267 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-qdxtj" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.560824 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" event={"ID":"bfcb03e2-b241-4df4-8295-33b5ad3eae58","Type":"ContainerStarted","Data":"41a88b08b90155b28ddc2f3ddec0e77ae510840f92190d144b8de1f7bc244eea"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.576984 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.578552 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.078530908 +0000 UTC m=+166.201371135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.599963 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" event={"ID":"10c85b6c-532f-4691-8e70-4abc2c4f668c","Type":"ContainerStarted","Data":"7750cab01320738d63a726af0b491a9347a1088a63ba857a7fc6b8bdbd035f87"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.607249 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxgbp" event={"ID":"a40a655e-56fc-4578-8dd9-6ae371433ea0","Type":"ContainerStarted","Data":"bf6ea2cef89e3dec2430aa25767c6844703750fdac73d28a02814edc2d1d4e5d"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.608550 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.633260 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.633355 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.647468 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" 
event={"ID":"8a93775b-af07-44d2-b075-59838e7a8920","Type":"ContainerStarted","Data":"d4f06702a6570c2d26ee0740fac2137dffb151be9c70ee87a14e14e52430b5dd"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.649034 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.660433 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w8f4k" event={"ID":"ed15f83f-f5b1-4cb8-8414-b143435f9eb4","Type":"ContainerStarted","Data":"ddc38acd4f3a80a2973918ac74c0e9516f44a2ca4b27952fda4b6af6319a47b4"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.678316 4771 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bgssx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.678398 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" podUID="8a93775b-af07-44d2-b075-59838e7a8920" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.678998 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.679291 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.179264362 +0000 UTC m=+166.302104589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.679524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.681674 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.1816651 +0000 UTC m=+166.304505327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.744141 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2" podStartSLOduration=136.74412053 podStartE2EDuration="2m16.74412053s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:05.717444854 +0000 UTC m=+165.840285091" watchObservedRunningTime="2026-01-29 09:09:05.74412053 +0000 UTC m=+165.866960747" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.746814 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" event={"ID":"47cda8f8-ad10-4898-a066-1c388df82ab4","Type":"ContainerStarted","Data":"655fa3d1b120a84d7840f70de2761566a9d22e6da14d36fc2e8c0ef64f62f3c5"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.750899 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" event={"ID":"f92b62e7-3351-4e65-a49d-49b6a6217796","Type":"ContainerStarted","Data":"0a78765b621fa413b642e9e387c847b7457754220495cfe7fe72ea2816bf0cca"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.752363 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.761899 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" event={"ID":"2fa1d696-380f-4007-99d9-2853d03c5782","Type":"ContainerStarted","Data":"e7bec6db50702cc770cfb037677da126202476a88fd3e9e487357ecb5c8deb8f"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.763414 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" event={"ID":"ab0e857c-3b75-4351-82e1-5faeaf0317be","Type":"ContainerStarted","Data":"68a9ec857510c6a19b7cc6efb05d9bf76198c9c51fea2a997ea670dc695a98ac"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.771778 4771 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nqgdw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.771861 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.782823 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" 
event={"ID":"d0e399b2-7666-44a6-b886-5d941a985630","Type":"ContainerStarted","Data":"bd3134059be39c998fa4a2127df3343d054b07279de5bfa424c84ff596f9fdf8"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.783317 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.783847 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.283815935 +0000 UTC m=+166.406656342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.803152 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" event={"ID":"34f0263f-c771-4ef0-91be-9d37f9ba6d60","Type":"ContainerStarted","Data":"7242a091d3a4558ce0f2fecbc22f90093e08b2c87cdd1d9e39274650f6b94693"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.805729 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" event={"ID":"8bda7c1e-4453-4f83-8b3d-c8118debe332","Type":"ContainerStarted","Data":"e682f397e05d82e8c31b7e58aa5902d27cdc61523b73de9c311b22ff7d6019af"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.807562 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" event={"ID":"7e132aa6-1055-48c5-9a8a-086f008c7a70","Type":"ContainerStarted","Data":"700f0a2e4f17737641a8dd7c34223acbf06e89b1765c03c8904469a6bccb484e"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.809058 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" event={"ID":"30a2442c-28f9-4390-bd1a-06a7542576a8","Type":"ContainerStarted","Data":"cf5a7843c8afa4ac3c8fd0ad9b24576d5afed9b720683bffe3f81619d0ba63fb"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.826691 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" event={"ID":"7afa746d-e424-4032-9e51-35cf653e50ac","Type":"ContainerStarted","Data":"404508facd193d5626a62c8451d442d0e0608e03b7dc0a6f1e3b69699e9d3765"} Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.830957 4771 patch_prober.go:28] interesting pod/console-operator-58897d9998-5xjmf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.831006 4771 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5xjmf" podUID="dd247d04-6c97-4819-a31d-f1f7eb95d40a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.879801 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.889512 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.898029 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2dm8r" Jan 29 09:09:05 crc kubenswrapper[4771]: E0129 09:09:05.905844 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.405824102 +0000 UTC m=+166.528664329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:05 crc kubenswrapper[4771]: I0129 09:09:05.939489 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b6l78" podStartSLOduration=136.939463245 podStartE2EDuration="2m16.939463245s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:05.932159128 +0000 UTC m=+166.054999355" watchObservedRunningTime="2026-01-29 09:09:05.939463245 +0000 UTC m=+166.062303472" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:05.999826 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.000922 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.500901166 +0000 UTC m=+166.623741393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.102172 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.102535 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.602516805 +0000 UTC m=+166.725357032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.204747 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.204971 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.704931857 +0000 UTC m=+166.827772084 (durationBeforeRetry 500ms). 
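The 500ms rhythm of these failures comes from the operation tracker named in each line (nestedpendingoperations.go): a failed volume operation is not retried in place, it is parked with a not-before deadline, and the next reconciler pass that lands after the deadline starts a fresh attempt. A rough sketch of that gate, assuming the fixed 500ms window printed here (the real kubelet grows the backoff exponentially; all names are invented):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pendingOp holds the outcome of the last attempt for one volume and the
    // earliest time a new attempt may start.
    type pendingOp struct {
    	lastError error
    	notBefore time.Time
    }

    // opTracker keeps at most one pending operation per volume key, which is
    // why each log line names the full {volumeName podName nodeName} tuple.
    type opTracker map[string]*pendingOp

    func (t opTracker) tryStart(key string, now time.Time) error {
    	if op, ok := t[key]; ok && now.Before(op.notBefore) {
    		return fmt.Errorf("operation for %q failed. No retries permitted until %s (durationBeforeRetry %v): %v",
    			key, op.notBefore.Format("15:04:05.000"), 500*time.Millisecond, op.lastError)
    	}
    	return nil
    }

    func (t opTracker) fail(key string, err error, now time.Time) {
    	t[key] = &pendingOp{lastError: err, notBefore: now.Add(500 * time.Millisecond)}
    }

    func main() {
    	ops := opTracker{}
    	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"

    	now := time.Now()
    	ops.fail(key, errors.New("driver not registered"), now)

    	// A reconciler pass ~100ms later is refused, as in the log...
    	if err := ops.tryStart(key, now.Add(100*time.Millisecond)); err != nil {
    		fmt.Println(err)
    	}
    	// ...while a pass after the window may try again.
    	if err := ops.tryStart(key, now.Add(600*time.Millisecond)); err == nil {
    		fmt.Println("retry permitted")
    	}
    }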
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.205221 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.205879 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.705850633 +0000 UTC m=+166.828691050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.303968 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-kxgbp" podStartSLOduration=138.303935832 podStartE2EDuration="2m18.303935832s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.12738299 +0000 UTC m=+166.250223217" watchObservedRunningTime="2026-01-29 09:09:06.303935832 +0000 UTC m=+166.426776059" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.306263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.306671 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.806634889 +0000 UTC m=+166.929475116 (durationBeforeRetry 500ms). 
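The strict unmount/mount alternation is the volume reconciler at work: each pass (roughly every 100ms in these timestamps) diffs desired state against actual state, finds the PVC still mounted for a terminated pod (UID 8f668bae-…) while simultaneously wanted by the replacement image-registry pod, starts both operations, and watches both fail on the unregistered driver. Reduced to a few lines of illustrative Go:

    package main

    import "fmt"

    // reconcile sketches the loop behind the reconciler_common.go entries:
    // unmount anything mounted but no longer desired, mount anything desired
    // but not yet mounted. Keys below are abbreviated from the log.
    func reconcile(desired, actual map[string]bool) {
    	for vol := range actual {
    		if !desired[vol] {
    			fmt.Println("operationExecutor.UnmountVolume started for", vol)
    		}
    	}
    	for vol := range desired {
    		if !actual[vol] {
    			fmt.Println("operationExecutor.MountVolume started for", vol)
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{"pvc-657094db for pod image-registry-697d97f7c8-jcdkc": true}
    	actual := map[string]bool{"pvc-657094db for terminated pod 8f668bae": true}
    	// Every pass prints one unmount and one mount, matching the cadence of
    	// the surrounding entries, until the CSI driver finally registers.
    	reconcile(desired, actual)
    }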
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.408644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.409225 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:06.909210125 +0000 UTC m=+167.032050352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.477234 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" podStartSLOduration=137.477203302 podStartE2EDuration="2m17.477203302s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.314218484 +0000 UTC m=+166.437058711" watchObservedRunningTime="2026-01-29 09:09:06.477203302 +0000 UTC m=+166.600043529" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.477621 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" podStartSLOduration=137.477611733 podStartE2EDuration="2m17.477611733s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.465092158 +0000 UTC m=+166.587932395" watchObservedRunningTime="2026-01-29 09:09:06.477611733 +0000 UTC m=+166.600451970" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.512172 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.512820 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.01279741 +0000 UTC m=+167.135637637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.513013 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:06 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:06 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:06 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.513091 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.587558 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcnw" podStartSLOduration=138.587528398 podStartE2EDuration="2m18.587528398s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.585665395 +0000 UTC m=+166.708505642" watchObservedRunningTime="2026-01-29 09:09:06.587528398 +0000 UTC m=+166.710368635" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.600261 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-clpx4" podStartSLOduration=138.600227527 podStartE2EDuration="2m18.600227527s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.52974589 +0000 UTC m=+166.652586117" watchObservedRunningTime="2026-01-29 09:09:06.600227527 +0000 UTC m=+166.723067754" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.614344 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.620457 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 09:09:07.12042947 +0000 UTC m=+167.243269697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.672046 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" podStartSLOduration=137.672019652 podStartE2EDuration="2m17.672019652s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.668373908 +0000 UTC m=+166.791214145" watchObservedRunningTime="2026-01-29 09:09:06.672019652 +0000 UTC m=+166.794859879" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.730419 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.730927 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.23090608 +0000 UTC m=+167.353746307 (durationBeforeRetry 500ms). 
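The "Observed pod startup duration" entries are simple timestamp arithmetic: podStartSLOduration is the watch-observed running time minus podCreationTimestamp, and because firstStartedPulling/lastFinishedPulling are zero values here (0001-01-01, i.e. no image pull was observed), no pull window is subtracted and the SLO duration equals the E2E duration. Checking the marketplace-operator entry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the marketplace-operator entry in the log.
    	created, _ := time.Parse(time.RFC3339, "2026-01-29T09:06:49Z")
    	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-29T09:09:06.672019652Z")

    	// Prints 2m17.672019652s, matching podStartSLOduration=137.672019652.
    	fmt.Println(observed.Sub(created))
    }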
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.821011 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v7ntz" podStartSLOduration=138.820995613 podStartE2EDuration="2m18.820995613s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.819363787 +0000 UTC m=+166.942204014" watchObservedRunningTime="2026-01-29 09:09:06.820995613 +0000 UTC m=+166.943835840" Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.834578 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.834969 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.334955678 +0000 UTC m=+167.457795905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.911248 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" event={"ID":"8bda7c1e-4453-4f83-8b3d-c8118debe332","Type":"ContainerStarted","Data":"f5d791e24085f1b5f4f1afaa78ef49a14f970b9fae98f784d127865565ce266d"} Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.939332 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:06 crc kubenswrapper[4771]: E0129 09:09:06.940108 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.440084967 +0000 UTC m=+167.562925184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.964462 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" event={"ID":"2fa1d696-380f-4007-99d9-2853d03c5782","Type":"ContainerStarted","Data":"81f7c0b33dc48896367b9d7b55b3acee0df7091101d89d16f56f2bef08ccdb2a"} Jan 29 09:09:06 crc kubenswrapper[4771]: I0129 09:09:06.993174 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" event={"ID":"7e132aa6-1055-48c5-9a8a-086f008c7a70","Type":"ContainerStarted","Data":"3bbe1c75098efe82309417ad4c745d6cba481c636d23f82248724cccb731af4a"} Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.030838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" event={"ID":"bafa5883-800f-430b-8ac9-580ad34cc571","Type":"ContainerStarted","Data":"20374096d5e73ff8017b14e82e489ea1c8734089b045eb459f5f9f0d25fbe884"} Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.031900 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.048373 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.051443 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.551416902 +0000 UTC m=+167.674257129 (durationBeforeRetry 500ms). 
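The "SyncLoop (PLEG)" entries threaded through this excerpt come from the Pod Lifecycle Event Generator: it periodically relists the container runtime, diffs the result against the previous relist, and hands typed events to the kubelet sync loop; the Data field is the runtime ID of the container or sandbox that changed. A simplified diff, with invented types:

    package main

    import "fmt"

    // plegEvent mirrors the event shape printed in the log lines.
    type plegEvent struct {
    	ID   string // pod UID
    	Type string // e.g. "ContainerStarted"
    	Data string // container or sandbox runtime ID
    }

    // diff turns the change between two relists into events, the core idea
    // behind PLEG (heavily simplified; real PLEG tracks per-container state).
    func diff(prev, cur map[string]bool, podUID string) []plegEvent {
    	var events []plegEvent
    	for id := range cur {
    		if !prev[id] {
    			events = append(events, plegEvent{podUID, "ContainerStarted", id})
    		}
    	}
    	for id := range prev {
    		if !cur[id] {
    			events = append(events, plegEvent{podUID, "ContainerDied", id})
    		}
    	}
    	return events
    }

    func main() {
    	prev := map[string]bool{}
    	cur := map[string]bool{"81f7c0b33dc48896...": true} // ID abbreviated from the log
    	for _, e := range diff(prev, cur, "2fa1d696-380f-4007-99d9-2853d03c5782") {
    		fmt.Printf("SyncLoop (PLEG): event for pod event=%+v\n", e)
    	}
    }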
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.071053 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w8f4k" event={"ID":"ed15f83f-f5b1-4cb8-8414-b143435f9eb4","Type":"ContainerStarted","Data":"92018863e1de9e1613db50c83aad2841354eb7320238d78367fb3b74fb191192"} Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.071962 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-w8f4k" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.090343 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-stkl2" podStartSLOduration=138.090316124 podStartE2EDuration="2m18.090316124s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:06.95042559 +0000 UTC m=+167.073265837" watchObservedRunningTime="2026-01-29 09:09:07.090316124 +0000 UTC m=+167.213156351" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.112448 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" event={"ID":"39ef9676-ef1e-4410-a82a-3045b2986e88","Type":"ContainerStarted","Data":"703bb8a0c265d4cab327861ac1819f1bb24011a2f48e4450a8f166201a0ff065"} Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.143928 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9gkcl" event={"ID":"872b5467-e306-44a0-b142-baea0c170180","Type":"ContainerStarted","Data":"825c4c2feaf7840087002588e9fc30f5933352568341d2a8a3d0ccbd806e451f"} Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.144940 4771 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bgssx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.145007 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" podUID="8a93775b-af07-44d2-b075-59838e7a8920" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.145404 4771 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xlvgv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.145612 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv" 
podUID="3d60aa57-7b9b-43eb-a44d-704c92ce6a57" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.146219 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.147084 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.146778 4771 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nqgdw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.147303 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.157866 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.159709 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.659661129 +0000 UTC m=+167.782501356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.164577 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qvr68" podStartSLOduration=138.164553287 podStartE2EDuration="2m18.164553287s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:07.091205469 +0000 UTC m=+167.214045716" watchObservedRunningTime="2026-01-29 09:09:07.164553287 +0000 UTC m=+167.287393514" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.176403 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mbqr8" podStartSLOduration=139.176382943 podStartE2EDuration="2m19.176382943s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:07.143231513 +0000 UTC m=+167.266071760" watchObservedRunningTime="2026-01-29 09:09:07.176382943 +0000 UTC m=+167.299223170" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.201255 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" podStartSLOduration=138.201227957 podStartE2EDuration="2m18.201227957s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:07.191332626 +0000 UTC m=+167.314172853" watchObservedRunningTime="2026-01-29 09:09:07.201227957 +0000 UTC m=+167.324068174" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.255072 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-w8f4k" podStartSLOduration=10.255055652 podStartE2EDuration="10.255055652s" podCreationTimestamp="2026-01-29 09:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:07.25322967 +0000 UTC m=+167.376069897" watchObservedRunningTime="2026-01-29 09:09:07.255055652 +0000 UTC m=+167.377895879" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.263752 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.268262 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 09:09:07.768241225 +0000 UTC m=+167.891081452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.306824 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wszgv" podStartSLOduration=138.306789538 podStartE2EDuration="2m18.306789538s" podCreationTimestamp="2026-01-29 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:07.304886224 +0000 UTC m=+167.427726451" watchObservedRunningTime="2026-01-29 09:09:07.306789538 +0000 UTC m=+167.429629775" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.367579 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.368143 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.868117205 +0000 UTC m=+167.990957432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.470081 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.470654 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:07.97062741 +0000 UTC m=+168.093467637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.511231 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.512124 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.513081 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:07 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:07 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:07 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.513168 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.520412 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.521111 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.537355 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.574214 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.574976 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.074948145 +0000 UTC m=+168.197788372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.677908 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06090bf7-e79d-450e-98e0-40acc98884c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.677966 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06090bf7-e79d-450e-98e0-40acc98884c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.678023 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.678466 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.178448588 +0000 UTC m=+168.301288815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.778981 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.779231 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.279182072 +0000 UTC m=+168.402022309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.779316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06090bf7-e79d-450e-98e0-40acc98884c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.779401 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06090bf7-e79d-450e-98e0-40acc98884c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.779539 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.779862 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06090bf7-e79d-450e-98e0-40acc98884c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.779939 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.279920813 +0000 UTC m=+168.402761040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.811812 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06090bf7-e79d-450e-98e0-40acc98884c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.881040 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.881424 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.381406699 +0000 UTC m=+168.504246926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.896367 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:07 crc kubenswrapper[4771]: I0129 09:09:07.986933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:07 crc kubenswrapper[4771]: E0129 09:09:07.987587 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.487559617 +0000 UTC m=+168.610400014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.090515 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.091023 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.590999797 +0000 UTC m=+168.713840024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.165801 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" event={"ID":"39ef9676-ef1e-4410-a82a-3045b2986e88","Type":"ContainerStarted","Data":"57953588c6557ea38c0b1f93e9e69512d0612c76655990e42df73cc8e668f5b6"}
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.168383 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.168469 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.168783 4771 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nqgdw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.168869 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.182663 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hwlvt"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.184384 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.199390 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.199914 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.699894493 +0000 UTC m=+168.822734720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.202135 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.219876 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwlvt"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.303151 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.304053 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.804006563 +0000 UTC m=+168.926846790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.304397 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-utilities\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.304578 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqtn\" (UniqueName: \"kubernetes.io/projected/8baa3171-ca7e-40a7-bd19-dfae944704fa-kube-api-access-9tqtn\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.304911 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.305022 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-catalog-content\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.311172 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.811151165 +0000 UTC m=+168.933991382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.389126 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sffpf"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.390370 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.406316 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.406512 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-catalog-content\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.406591 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.906530568 +0000 UTC m=+169.029371035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.406735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-utilities\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.407002 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqtn\" (UniqueName: \"kubernetes.io/projected/8baa3171-ca7e-40a7-bd19-dfae944704fa-kube-api-access-9tqtn\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.407119 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-catalog-content\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.407438 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-utilities\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.407458 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.408009 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:08.907990189 +0000 UTC m=+169.030830426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.417346 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.479779 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sffpf"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.507915 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqtn\" (UniqueName: \"kubernetes.io/projected/8baa3171-ca7e-40a7-bd19-dfae944704fa-kube-api-access-9tqtn\") pod \"community-operators-hwlvt\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.508667 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.509183 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/e6838739-1976-46a6-891d-e2a7ee919777-kube-api-access-fpngr\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.509252 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-utilities\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.509371 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-catalog-content\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.509562 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.009539047 +0000 UTC m=+169.132379274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.526177 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwlvt"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.539882 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 09:09:08 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld
Jan 29 09:09:08 crc kubenswrapper[4771]: [+]process-running ok
Jan 29 09:09:08 crc kubenswrapper[4771]: healthz check failed
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.539970 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.612867 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/e6838739-1976-46a6-891d-e2a7ee919777-kube-api-access-fpngr\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.612970 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-utilities\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.613039 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-catalog-content\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.613104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.613606 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.113582955 +0000 UTC m=+169.236423192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.614642 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-utilities\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.623955 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-catalog-content\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.734291 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.734970 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.234942403 +0000 UTC m=+169.357782640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.757285 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j7lqm"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.757993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/e6838739-1976-46a6-891d-e2a7ee919777-kube-api-access-fpngr\") pod \"certified-operators-sffpf\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.758923 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.845218 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.846160 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.346142944 +0000 UTC m=+169.468983171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.919985 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7lqm"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.947683 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.948098 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-utilities\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.948140 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-catalog-content\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.948218 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vgq\" (UniqueName: \"kubernetes.io/projected/d60426c7-faf6-4300-9ed0-160a76d81782-kube-api-access-j5vgq\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:08 crc kubenswrapper[4771]: E0129 09:09:08.948352 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.448330129 +0000 UTC m=+169.571170356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.989736 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.990784 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.995662 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rxfcv"]
Jan 29 09:09:08 crc kubenswrapper[4771]: I0129 09:09:08.996999 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.010772 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sffpf"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.065618 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.068254 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-utilities\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.068323 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.068345 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-catalog-content\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.068423 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vgq\" (UniqueName: \"kubernetes.io/projected/d60426c7-faf6-4300-9ed0-160a76d81782-kube-api-access-j5vgq\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.069424 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-utilities\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.071033 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-catalog-content\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.098002 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.154928 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.654901583 +0000 UTC m=+169.777741810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.173307 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.173847 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vkf8\" (UniqueName: \"kubernetes.io/projected/bf879871-35a5-4d77-b71c-672c2f524993-kube-api-access-6vkf8\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.173927 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e712f396-53c6-4f25-85e9-c358a1855644-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.173981 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-utilities\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.174039 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e712f396-53c6-4f25-85e9-c358a1855644-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.174095 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-catalog-content\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.174310 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.674283622 +0000 UTC m=+169.797123849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.174428 4771 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bgssx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.195960 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx" podUID="8a93775b-af07-44d2-b075-59838e7a8920" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.219067 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxfcv"]
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.221662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" event={"ID":"39ef9676-ef1e-4410-a82a-3045b2986e88","Type":"ContainerStarted","Data":"a5c5efa8b25048926a54f14c97b375dfb421fbb0045278d5295180c2fd5d5a40"}
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.247318 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.274405 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vgq\" (UniqueName: \"kubernetes.io/projected/d60426c7-faf6-4300-9ed0-160a76d81782-kube-api-access-j5vgq\") pod \"community-operators-j7lqm\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.302656 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e712f396-53c6-4f25-85e9-c358a1855644-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.302761 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.302799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-utilities\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.302895 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e712f396-53c6-4f25-85e9-c358a1855644-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.302942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-catalog-content\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.303021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vkf8\" (UniqueName: \"kubernetes.io/projected/bf879871-35a5-4d77-b71c-672c2f524993-kube-api-access-6vkf8\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.309181 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.809158924 +0000 UTC m=+169.931999151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.309741 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-utilities\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.315291 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-catalog-content\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.315499 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e712f396-53c6-4f25-85e9-c358a1855644-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.400232 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.400380 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.405783 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.407201 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:09.907177151 +0000 UTC m=+170.030017378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.465039 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7lqm"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.465942 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.482116 4771 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ztsgd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]log ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]etcd ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 29 09:09:09 crc kubenswrapper[4771]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 29 09:09:09 crc kubenswrapper[4771]: livez check failed
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.482219 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" podUID="47cda8f8-ad10-4898-a066-1c388df82ab4" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.500790 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.509004 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.509566 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.009545902 +0000 UTC m=+170.132386119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.522722 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 09:09:09 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld
Jan 29 09:09:09 crc kubenswrapper[4771]: [+]process-running ok
Jan 29 09:09:09 crc kubenswrapper[4771]: healthz check failed
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.522833 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.524682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vkf8\" (UniqueName: \"kubernetes.io/projected/bf879871-35a5-4d77-b71c-672c2f524993-kube-api-access-6vkf8\") pod \"certified-operators-rxfcv\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.531383 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e712f396-53c6-4f25-85e9-c358a1855644-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.568391 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwlvt"]
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.610590 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.611032 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.111010307 +0000 UTC m=+170.233850524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.698180 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.719220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.719830 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.219808449 +0000 UTC m=+170.342648676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.758686 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxfcv"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.821826 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.822556 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.32252869 +0000 UTC m=+170.445368917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.889771 4771 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.954261 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:09 crc kubenswrapper[4771]: E0129 09:09:09.954988 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.454963693 +0000 UTC m=+170.577803910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.983283 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jzc5h"
Jan 29 09:09:09 crc kubenswrapper[4771]: I0129 09:09:09.983332 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jzc5h"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.011196 4771 patch_prober.go:28] interesting pod/console-f9d7485db-jzc5h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.011273 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jzc5h" podUID="2fd142c7-125b-41ad-a645-c1eac4caa96b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.057646 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:10 crc kubenswrapper[4771]: E0129 09:09:10.058015 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.557968261 +0000 UTC m=+170.680808488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.058232 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:10 crc kubenswrapper[4771]: E0129 09:09:10.060414 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.56040074 +0000 UTC m=+170.683241157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jcdkc" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.075136 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5xjmf"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.168782 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 09:09:10 crc kubenswrapper[4771]: E0129 09:09:10.171038 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 09:09:10.671005004 +0000 UTC m=+170.793845231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.199391 4771 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T09:09:09.889812156Z","Handler":null,"Name":""}
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.209457 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b5q79"]
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.211184 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.238573 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.240642 4771 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.240718 4771 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.272075 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5q79"]
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.273256 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-catalog-content\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.273356 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-utilities\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.273400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf884\" (UniqueName: \"kubernetes.io/projected/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-kube-api-access-lf884\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.273454 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.310842 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwlvt" event={"ID":"8baa3171-ca7e-40a7-bd19-dfae944704fa","Type":"ContainerStarted","Data":"f5bbbe22190017c5f9b7e27da21460d4123131cca77ec82def5195fe2d3266ed"}
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.311890 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.311922 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.337590 4771 generic.go:334] "Generic (PLEG): container finished" podID="8cda2f63-799c-4e05-894d-c0fe721cf974" containerID="244f7ca7877b0d49e9d04296616d20be9c2065883eff1967f69a2f5110d17b9c" exitCode=0
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.337731 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" event={"ID":"8cda2f63-799c-4e05-894d-c0fe721cf974","Type":"ContainerDied","Data":"244f7ca7877b0d49e9d04296616d20be9c2065883eff1967f69a2f5110d17b9c"}
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.370561 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" event={"ID":"39ef9676-ef1e-4410-a82a-3045b2986e88","Type":"ContainerStarted","Data":"72aca805c9cd642e6245643c056420bd5b2cddb49e83690b9372bb1fcc7067a9"}
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.376524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf884\" (UniqueName: \"kubernetes.io/projected/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-kube-api-access-lf884\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.376661 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-catalog-content\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.376783 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-utilities\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.378458 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-utilities\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.379913 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-catalog-content\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.381483 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bgssx"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.383301 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"06090bf7-e79d-450e-98e0-40acc98884c5","Type":"ContainerStarted","Data":"39525dbb589894b84330048365516b711a028edab2d7a55ae1228a4a4a4d9060"}
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.396495 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.396573 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.397391 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.397432 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.401001 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dtnz2"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.412209 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xlvgv"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.449426 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7lqm"]
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.452651 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.458413 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf884\" (UniqueName: \"kubernetes.io/projected/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-kube-api-access-lf884\") pod \"redhat-marketplace-b5q79\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.469751 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h84vs" podStartSLOduration=13.469717918 podStartE2EDuration="13.469717918s" podCreationTimestamp="2026-01-29 09:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:10.426851333 +0000 UTC m=+170.549691570" watchObservedRunningTime="2026-01-29 09:09:10.469717918 +0000 UTC m=+170.592558145"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.533519 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-95rng"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.590832 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 09:09:10 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld
Jan 29 09:09:10 crc kubenswrapper[4771]: [+]process-running ok
Jan 29 09:09:10 crc kubenswrapper[4771]: healthz check failed
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.590935 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.614176 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5q79"
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.632777 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4qs84"]
Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.653126 4771 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.674111 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qs84"] Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.685945 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jcdkc\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.688272 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sffpf"] Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.770865 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.771300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-catalog-content\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.771364 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-482lq\" (UniqueName: \"kubernetes.io/projected/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-kube-api-access-482lq\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.771407 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-utilities\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.784588 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.814019 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.869655 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.873243 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-catalog-content\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.876030 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-482lq\" (UniqueName: \"kubernetes.io/projected/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-kube-api-access-482lq\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.876118 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-utilities\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.876445 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-utilities\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.874182 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-catalog-content\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.901714 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 09:09:10 crc kubenswrapper[4771]: I0129 09:09:10.936890 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-482lq\" (UniqueName: \"kubernetes.io/projected/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-kube-api-access-482lq\") pod \"redhat-marketplace-4qs84\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.018215 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.189811 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxfcv"] Jan 29 09:09:11 crc kubenswrapper[4771]: W0129 09:09:11.275228 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf879871_35a5_4d77_b71c_672c2f524993.slice/crio-97169004d7f3027a65f7424f4453d9295e78352e5bb76af9e56429f004cf5269 WatchSource:0}: Error finding container 97169004d7f3027a65f7424f4453d9295e78352e5bb76af9e56429f004cf5269: Status 404 returned error can't find the container with id 97169004d7f3027a65f7424f4453d9295e78352e5bb76af9e56429f004cf5269 Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.361903 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5q79"] Jan 29 09:09:11 crc kubenswrapper[4771]: W0129 09:09:11.372822 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc279bbe_fdb9_4371_afaf_e2573ea04ce2.slice/crio-16308c45fb65079b77f5b0d1eb46c9b416c00f85e98916dd43141d3ac05b54ec WatchSource:0}: Error finding container 16308c45fb65079b77f5b0d1eb46c9b416c00f85e98916dd43141d3ac05b54ec: Status 404 returned error can't find the container with id 16308c45fb65079b77f5b0d1eb46c9b416c00f85e98916dd43141d3ac05b54ec Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.391674 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.396810 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5q79" event={"ID":"fc279bbe-fdb9-4371-afaf-e2573ea04ce2","Type":"ContainerStarted","Data":"16308c45fb65079b77f5b0d1eb46c9b416c00f85e98916dd43141d3ac05b54ec"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.398362 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e712f396-53c6-4f25-85e9-c358a1855644","Type":"ContainerStarted","Data":"253aa8df793094b18938ea142d188e03e3508b9392ac95039527c61ef7a38cfd"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.399726 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxfcv" event={"ID":"bf879871-35a5-4d77-b71c-672c2f524993","Type":"ContainerStarted","Data":"97169004d7f3027a65f7424f4453d9295e78352e5bb76af9e56429f004cf5269"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.401116 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/938d1706-ae32-445f-b1b0-6cacad136ef8-metrics-certs\") pod \"network-metrics-daemon-lzs9r\" (UID: \"938d1706-ae32-445f-b1b0-6cacad136ef8\") " pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.405012 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"06090bf7-e79d-450e-98e0-40acc98884c5","Type":"ContainerStarted","Data":"659c66b6ea63ed15eb2d06e61321af06444496d7d098140e2e429f621d6a90e9"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.409105 4771 generic.go:334] "Generic (PLEG): container finished" podID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerID="7670b5785e9044afd3132706311067cd5a2197f862eee172f6ae13b19e4c7244" exitCode=0 Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.409254 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwlvt" event={"ID":"8baa3171-ca7e-40a7-bd19-dfae944704fa","Type":"ContainerDied","Data":"7670b5785e9044afd3132706311067cd5a2197f862eee172f6ae13b19e4c7244"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.411676 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.415089 4771 generic.go:334] "Generic (PLEG): container finished" podID="e6838739-1976-46a6-891d-e2a7ee919777" containerID="18a8aa64171be8a556b9d7999fd79b88ad8fd53cffb0fb169535113ce4db9987" exitCode=0 Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.415307 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sffpf" event={"ID":"e6838739-1976-46a6-891d-e2a7ee919777","Type":"ContainerDied","Data":"18a8aa64171be8a556b9d7999fd79b88ad8fd53cffb0fb169535113ce4db9987"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.415352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sffpf" event={"ID":"e6838739-1976-46a6-891d-e2a7ee919777","Type":"ContainerStarted","Data":"6d1bc4ba9a136e0b8c53de1f9dc403094477e0265fde1b49bf4ce6c6a21912ad"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.417625 4771 generic.go:334] "Generic (PLEG): container finished" podID="d60426c7-faf6-4300-9ed0-160a76d81782" containerID="f359effebd4f093b212cdf2ac2e319b1503a2c703dd1f69a0f4715acc9eb5eca" exitCode=0 Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.417712 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7lqm" event={"ID":"d60426c7-faf6-4300-9ed0-160a76d81782","Type":"ContainerDied","Data":"f359effebd4f093b212cdf2ac2e319b1503a2c703dd1f69a0f4715acc9eb5eca"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.417751 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7lqm" event={"ID":"d60426c7-faf6-4300-9ed0-160a76d81782","Type":"ContainerStarted","Data":"3325c1adc2f671dab7a5d79ae30019c158639ce50e61cd7274b4c30f49a93ff6"} Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.435099 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jcdkc"] Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.443012 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.442976373 podStartE2EDuration="4.442976373s" podCreationTimestamp="2026-01-29 09:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:11.442749637 +0000 UTC m=+171.565589894" watchObservedRunningTime="2026-01-29 09:09:11.442976373 +0000 UTC m=+171.565816620" Jan 29 09:09:11 crc kubenswrapper[4771]: W0129 09:09:11.460000 4771 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe9ebbbe_af6e_409d_8039_db5fb66d062b.slice/crio-048db9a5bac80cd495a14b5af624f140ff7677fad33da7f6108aa8cdceaa81d2 WatchSource:0}: Error finding container 048db9a5bac80cd495a14b5af624f140ff7677fad33da7f6108aa8cdceaa81d2: Status 404 returned error can't find the container with id 048db9a5bac80cd495a14b5af624f140ff7677fad33da7f6108aa8cdceaa81d2 Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.460335 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lzs9r" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.538786 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:11 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:11 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:11 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.538948 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.555588 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vphmv"] Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.568157 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.574115 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.586901 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qs84"] Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.604815 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-utilities\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.604871 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-catalog-content\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.604907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phwm7\" (UniqueName: \"kubernetes.io/projected/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-kube-api-access-phwm7\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.605104 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-vphmv"] Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.706286 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phwm7\" (UniqueName: \"kubernetes.io/projected/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-kube-api-access-phwm7\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.706472 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-utilities\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.706512 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-catalog-content\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.707087 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-catalog-content\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.707258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-utilities\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.737341 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phwm7\" (UniqueName: \"kubernetes.io/projected/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-kube-api-access-phwm7\") pod \"redhat-operators-vphmv\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.971351 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-478nk"] Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.973003 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.993615 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:09:11 crc kubenswrapper[4771]: I0129 09:09:11.999213 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lzs9r"] Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.009046 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-478nk"] Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.010824 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-utilities\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.010884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ztlz\" (UniqueName: \"kubernetes.io/projected/5d6ca13b-901b-4de9-bf79-494866c7ebdd-kube-api-access-8ztlz\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.010907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-catalog-content\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.111882 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.113977 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-utilities\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.114040 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ztlz\" (UniqueName: \"kubernetes.io/projected/5d6ca13b-901b-4de9-bf79-494866c7ebdd-kube-api-access-8ztlz\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.114075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-catalog-content\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.116960 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-catalog-content\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.117296 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-utilities\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.144481 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ztlz\" (UniqueName: \"kubernetes.io/projected/5d6ca13b-901b-4de9-bf79-494866c7ebdd-kube-api-access-8ztlz\") pod \"redhat-operators-478nk\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.155161 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.158862 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.216626 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cda2f63-799c-4e05-894d-c0fe721cf974-config-volume\") pod \"8cda2f63-799c-4e05-894d-c0fe721cf974\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.216780 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cda2f63-799c-4e05-894d-c0fe721cf974-secret-volume\") pod \"8cda2f63-799c-4e05-894d-c0fe721cf974\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.216912 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m95rq\" (UniqueName: \"kubernetes.io/projected/8cda2f63-799c-4e05-894d-c0fe721cf974-kube-api-access-m95rq\") pod \"8cda2f63-799c-4e05-894d-c0fe721cf974\" (UID: \"8cda2f63-799c-4e05-894d-c0fe721cf974\") " Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.219978 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cda2f63-799c-4e05-894d-c0fe721cf974-config-volume" (OuterVolumeSpecName: "config-volume") pod "8cda2f63-799c-4e05-894d-c0fe721cf974" (UID: "8cda2f63-799c-4e05-894d-c0fe721cf974"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.223223 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cda2f63-799c-4e05-894d-c0fe721cf974-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.239494 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cda2f63-799c-4e05-894d-c0fe721cf974-kube-api-access-m95rq" (OuterVolumeSpecName: "kube-api-access-m95rq") pod "8cda2f63-799c-4e05-894d-c0fe721cf974" (UID: "8cda2f63-799c-4e05-894d-c0fe721cf974"). InnerVolumeSpecName "kube-api-access-m95rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.244083 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cda2f63-799c-4e05-894d-c0fe721cf974-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8cda2f63-799c-4e05-894d-c0fe721cf974" (UID: "8cda2f63-799c-4e05-894d-c0fe721cf974"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.325149 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cda2f63-799c-4e05-894d-c0fe721cf974-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.325232 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m95rq\" (UniqueName: \"kubernetes.io/projected/8cda2f63-799c-4e05-894d-c0fe721cf974-kube-api-access-m95rq\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.521831 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:12 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:12 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:12 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.522405 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.533014 4771 generic.go:334] "Generic (PLEG): container finished" podID="bf879871-35a5-4d77-b71c-672c2f524993" containerID="777eb793c7ff67c99090a65804fcec98ca671c71d29e69be15f3f51a120283ce" exitCode=0 Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.533235 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxfcv" event={"ID":"bf879871-35a5-4d77-b71c-672c2f524993","Type":"ContainerDied","Data":"777eb793c7ff67c99090a65804fcec98ca671c71d29e69be15f3f51a120283ce"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.567551 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.567670 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz" event={"ID":"8cda2f63-799c-4e05-894d-c0fe721cf974","Type":"ContainerDied","Data":"951a93f11bbf0c657911bc5dd8c06ae8fd4b14af34a3a5dcc9694cdc3cc67da3"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.567898 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951a93f11bbf0c657911bc5dd8c06ae8fd4b14af34a3a5dcc9694cdc3cc67da3" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.572208 4771 generic.go:334] "Generic (PLEG): container finished" podID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerID="aa8d61b0bcee6e2c65cdc4b6f4a63b506900990e331a05b24f4b1d43d98df1bb" exitCode=0 Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.572297 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qs84" event={"ID":"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b","Type":"ContainerDied","Data":"aa8d61b0bcee6e2c65cdc4b6f4a63b506900990e331a05b24f4b1d43d98df1bb"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.572329 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qs84" event={"ID":"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b","Type":"ContainerStarted","Data":"595a22fdee8437cdcb4b8ecb91884d9f9143f393a1193de26520030646c0d605"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.579596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" event={"ID":"938d1706-ae32-445f-b1b0-6cacad136ef8","Type":"ContainerStarted","Data":"47af0ba6b7fb262f096f7441ff9d487566cbc4ec0c333aa0c085321dd7c7305c"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.616049 4771 generic.go:334] "Generic (PLEG): container finished" podID="06090bf7-e79d-450e-98e0-40acc98884c5" containerID="659c66b6ea63ed15eb2d06e61321af06444496d7d098140e2e429f621d6a90e9" exitCode=0 Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.616483 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"06090bf7-e79d-450e-98e0-40acc98884c5","Type":"ContainerDied","Data":"659c66b6ea63ed15eb2d06e61321af06444496d7d098140e2e429f621d6a90e9"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.634242 4771 generic.go:334] "Generic (PLEG): container finished" podID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerID="b0d036560c5dac298417c6dd8694cf023bd476badbad178ff2764d8cecfb68a4" exitCode=0 Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.636387 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5q79" event={"ID":"fc279bbe-fdb9-4371-afaf-e2573ea04ce2","Type":"ContainerDied","Data":"b0d036560c5dac298417c6dd8694cf023bd476badbad178ff2764d8cecfb68a4"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.668803 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e712f396-53c6-4f25-85e9-c358a1855644","Type":"ContainerStarted","Data":"690774c890bd5b990a848c983d96d757f96862182f31fb9206b3560de9eeb643"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.682569 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" 
event={"ID":"fe9ebbbe-af6e-409d-8039-db5fb66d062b","Type":"ContainerStarted","Data":"edb46a5a5030b5933ad0c787ad595f43f8cc1719b2f40e3d7eef76dcb5e0ff17"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.682757 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" event={"ID":"fe9ebbbe-af6e-409d-8039-db5fb66d062b","Type":"ContainerStarted","Data":"048db9a5bac80cd495a14b5af624f140ff7677fad33da7f6108aa8cdceaa81d2"} Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.682953 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.859975 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" podStartSLOduration=144.859937642 podStartE2EDuration="2m24.859937642s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:12.808444243 +0000 UTC m=+172.931284500" watchObservedRunningTime="2026-01-29 09:09:12.859937642 +0000 UTC m=+172.982777869" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.862953 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.862923327 podStartE2EDuration="4.862923327s" podCreationTimestamp="2026-01-29 09:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:12.850820804 +0000 UTC m=+172.973661041" watchObservedRunningTime="2026-01-29 09:09:12.862923327 +0000 UTC m=+172.985763564" Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.924417 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vphmv"] Jan 29 09:09:12 crc kubenswrapper[4771]: I0129 09:09:12.999259 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-478nk"] Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.512572 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:13 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:13 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:13 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.513990 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.784537 4771 generic.go:334] "Generic (PLEG): container finished" podID="e712f396-53c6-4f25-85e9-c358a1855644" containerID="690774c890bd5b990a848c983d96d757f96862182f31fb9206b3560de9eeb643" exitCode=0 Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.785189 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"e712f396-53c6-4f25-85e9-c358a1855644","Type":"ContainerDied","Data":"690774c890bd5b990a848c983d96d757f96862182f31fb9206b3560de9eeb643"} Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.799645 4771 generic.go:334] "Generic (PLEG): container finished" podID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerID="4ecc763a2ad8addcdf507e44d92e6cf6e20fb37b71028706cb1b56804927b443" exitCode=0 Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.799779 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vphmv" event={"ID":"d0799f09-12e7-42a2-90c6-c1fb70e5c67f","Type":"ContainerDied","Data":"4ecc763a2ad8addcdf507e44d92e6cf6e20fb37b71028706cb1b56804927b443"} Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.799821 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vphmv" event={"ID":"d0799f09-12e7-42a2-90c6-c1fb70e5c67f","Type":"ContainerStarted","Data":"39f4bdf527d1f6c8a25108c86d674f2593fd148ea782f0846c6e8bfddf2295d6"} Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.816627 4771 generic.go:334] "Generic (PLEG): container finished" podID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerID="89c7497ee0081f84ff72cf8a897b39558e74ad7cc8e2b352460716ca7e847234" exitCode=0 Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.816802 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-478nk" event={"ID":"5d6ca13b-901b-4de9-bf79-494866c7ebdd","Type":"ContainerDied","Data":"89c7497ee0081f84ff72cf8a897b39558e74ad7cc8e2b352460716ca7e847234"} Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.816843 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-478nk" event={"ID":"5d6ca13b-901b-4de9-bf79-494866c7ebdd","Type":"ContainerStarted","Data":"cd46e6d93b1abfd1ee48f7c53223bc4808a7b3fa50aba2a2dd8d85207af80d68"} Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.870810 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" event={"ID":"938d1706-ae32-445f-b1b0-6cacad136ef8","Type":"ContainerStarted","Data":"2b08047acbbf33786e96da786debd8000edfb9244486e02473bda801a28bc253"} Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.871109 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lzs9r" event={"ID":"938d1706-ae32-445f-b1b0-6cacad136ef8","Type":"ContainerStarted","Data":"641fb6783bc0c04d32c8ebc882eb94c2674de5ff5d472c61e162477f9dae2b14"} Jan 29 09:09:13 crc kubenswrapper[4771]: I0129 09:09:13.897878 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lzs9r" podStartSLOduration=145.89784416 podStartE2EDuration="2m25.89784416s" podCreationTimestamp="2026-01-29 09:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:09:13.894490935 +0000 UTC m=+174.017331182" watchObservedRunningTime="2026-01-29 09:09:13.89784416 +0000 UTC m=+174.020684417" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.274569 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:09:14 crc kubenswrapper[4771]: 
I0129 09:09:14.274722 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.497765 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.509336 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:14 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:14 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:14 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.509452 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.513513 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ztsgd" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.653446 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.825270 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06090bf7-e79d-450e-98e0-40acc98884c5-kubelet-dir\") pod \"06090bf7-e79d-450e-98e0-40acc98884c5\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.825362 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06090bf7-e79d-450e-98e0-40acc98884c5-kube-api-access\") pod \"06090bf7-e79d-450e-98e0-40acc98884c5\" (UID: \"06090bf7-e79d-450e-98e0-40acc98884c5\") " Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.825892 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06090bf7-e79d-450e-98e0-40acc98884c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "06090bf7-e79d-450e-98e0-40acc98884c5" (UID: "06090bf7-e79d-450e-98e0-40acc98884c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.826015 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06090bf7-e79d-450e-98e0-40acc98884c5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.851625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06090bf7-e79d-450e-98e0-40acc98884c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "06090bf7-e79d-450e-98e0-40acc98884c5" (UID: "06090bf7-e79d-450e-98e0-40acc98884c5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.920472 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.920675 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"06090bf7-e79d-450e-98e0-40acc98884c5","Type":"ContainerDied","Data":"39525dbb589894b84330048365516b711a028edab2d7a55ae1228a4a4a4d9060"} Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.920749 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39525dbb589894b84330048365516b711a028edab2d7a55ae1228a4a4a4d9060" Jan 29 09:09:14 crc kubenswrapper[4771]: I0129 09:09:14.931493 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06090bf7-e79d-450e-98e0-40acc98884c5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.293090 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-w8f4k" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.508667 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:15 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:15 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:15 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.509347 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.590003 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.651089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e712f396-53c6-4f25-85e9-c358a1855644-kube-api-access\") pod \"e712f396-53c6-4f25-85e9-c358a1855644\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.651208 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e712f396-53c6-4f25-85e9-c358a1855644-kubelet-dir\") pod \"e712f396-53c6-4f25-85e9-c358a1855644\" (UID: \"e712f396-53c6-4f25-85e9-c358a1855644\") " Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.651317 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e712f396-53c6-4f25-85e9-c358a1855644-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e712f396-53c6-4f25-85e9-c358a1855644" (UID: "e712f396-53c6-4f25-85e9-c358a1855644"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.651764 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e712f396-53c6-4f25-85e9-c358a1855644-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.674254 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e712f396-53c6-4f25-85e9-c358a1855644-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e712f396-53c6-4f25-85e9-c358a1855644" (UID: "e712f396-53c6-4f25-85e9-c358a1855644"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.756116 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e712f396-53c6-4f25-85e9-c358a1855644-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.960996 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e712f396-53c6-4f25-85e9-c358a1855644","Type":"ContainerDied","Data":"253aa8df793094b18938ea142d188e03e3508b9392ac95039527c61ef7a38cfd"} Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.961066 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253aa8df793094b18938ea142d188e03e3508b9392ac95039527c61ef7a38cfd" Jan 29 09:09:15 crc kubenswrapper[4771]: I0129 09:09:15.961091 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 09:09:16 crc kubenswrapper[4771]: I0129 09:09:16.506766 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:16 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:16 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:16 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:16 crc kubenswrapper[4771]: I0129 09:09:16.506860 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:17 crc kubenswrapper[4771]: I0129 09:09:17.510264 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:17 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:17 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:17 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:17 crc kubenswrapper[4771]: I0129 09:09:17.510811 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:18 crc kubenswrapper[4771]: I0129 09:09:18.508789 4771 patch_prober.go:28] interesting 
pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:18 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:18 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:18 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:18 crc kubenswrapper[4771]: I0129 09:09:18.508922 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:19 crc kubenswrapper[4771]: I0129 09:09:19.514507 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:19 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:19 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:19 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:19 crc kubenswrapper[4771]: I0129 09:09:19.514967 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:19 crc kubenswrapper[4771]: I0129 09:09:19.961556 4771 patch_prober.go:28] interesting pod/console-f9d7485db-jzc5h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 09:09:19 crc kubenswrapper[4771]: I0129 09:09:19.961666 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jzc5h" podUID="2fd142c7-125b-41ad-a645-c1eac4caa96b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 09:09:20 crc kubenswrapper[4771]: I0129 09:09:20.396131 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:20 crc kubenswrapper[4771]: I0129 09:09:20.396838 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:20 crc kubenswrapper[4771]: I0129 09:09:20.396438 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:20 crc kubenswrapper[4771]: I0129 09:09:20.396989 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:20 crc kubenswrapper[4771]: I0129 09:09:20.509538 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:20 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:20 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:20 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:20 crc kubenswrapper[4771]: I0129 09:09:20.509643 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:21 crc kubenswrapper[4771]: I0129 09:09:21.509347 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:21 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:21 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:21 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:21 crc kubenswrapper[4771]: I0129 09:09:21.509512 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:22 crc kubenswrapper[4771]: I0129 09:09:22.507320 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:22 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:22 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:22 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:22 crc kubenswrapper[4771]: I0129 09:09:22.507413 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:23 crc kubenswrapper[4771]: I0129 09:09:23.511129 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:23 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:23 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:23 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:23 crc kubenswrapper[4771]: I0129 09:09:23.511303 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:24 crc kubenswrapper[4771]: I0129 09:09:24.507487 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 09:09:24 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 29 09:09:24 crc kubenswrapper[4771]: [+]process-running ok Jan 29 09:09:24 crc kubenswrapper[4771]: healthz check failed Jan 29 09:09:24 crc kubenswrapper[4771]: I0129 09:09:24.507644 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 09:09:25 crc kubenswrapper[4771]: I0129 09:09:25.508655 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:09:25 crc kubenswrapper[4771]: I0129 09:09:25.519349 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-95rng" Jan 29 09:09:29 crc kubenswrapper[4771]: I0129 09:09:29.971016 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:09:29 crc kubenswrapper[4771]: I0129 09:09:29.977480 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.396358 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.396449 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.396551 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.396683 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.396822 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.400047 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.400137 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" 
podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.402357 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"bf6ea2cef89e3dec2430aa25767c6844703750fdac73d28a02814edc2d1d4e5d"} pod="openshift-console/downloads-7954f5f757-kxgbp" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.402626 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" containerID="cri-o://bf6ea2cef89e3dec2430aa25767c6844703750fdac73d28a02814edc2d1d4e5d" gracePeriod=2 Jan 29 09:09:30 crc kubenswrapper[4771]: I0129 09:09:30.795536 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:09:32 crc kubenswrapper[4771]: I0129 09:09:32.186857 4771 generic.go:334] "Generic (PLEG): container finished" podID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerID="bf6ea2cef89e3dec2430aa25767c6844703750fdac73d28a02814edc2d1d4e5d" exitCode=0 Jan 29 09:09:32 crc kubenswrapper[4771]: I0129 09:09:32.186941 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxgbp" event={"ID":"a40a655e-56fc-4578-8dd9-6ae371433ea0","Type":"ContainerDied","Data":"bf6ea2cef89e3dec2430aa25767c6844703750fdac73d28a02814edc2d1d4e5d"} Jan 29 09:09:40 crc kubenswrapper[4771]: I0129 09:09:40.132347 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdvmd" Jan 29 09:09:40 crc kubenswrapper[4771]: I0129 09:09:40.395789 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:40 crc kubenswrapper[4771]: I0129 09:09:40.395889 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:44 crc kubenswrapper[4771]: I0129 09:09:44.271572 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:09:44 crc kubenswrapper[4771]: I0129 09:09:44.272839 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:09:47 crc kubenswrapper[4771]: E0129 09:09:47.756666 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 09:09:47 crc kubenswrapper[4771]: E0129 09:09:47.756920 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vkf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rxfcv_openshift-marketplace(bf879871-35a5-4d77-b71c-672c2f524993): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:09:47 crc kubenswrapper[4771]: E0129 09:09:47.758224 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rxfcv" podUID="bf879871-35a5-4d77-b71c-672c2f524993" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.237935 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 09:09:49 crc kubenswrapper[4771]: E0129 09:09:49.238923 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06090bf7-e79d-450e-98e0-40acc98884c5" containerName="pruner" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.238944 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="06090bf7-e79d-450e-98e0-40acc98884c5" containerName="pruner" Jan 29 09:09:49 crc kubenswrapper[4771]: E0129 09:09:49.238956 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cda2f63-799c-4e05-894d-c0fe721cf974" containerName="collect-profiles" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.238967 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cda2f63-799c-4e05-894d-c0fe721cf974" containerName="collect-profiles" Jan 29 09:09:49 crc kubenswrapper[4771]: E0129 09:09:49.238985 4771 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e712f396-53c6-4f25-85e9-c358a1855644" containerName="pruner" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.238995 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e712f396-53c6-4f25-85e9-c358a1855644" containerName="pruner" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.239159 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cda2f63-799c-4e05-894d-c0fe721cf974" containerName="collect-profiles" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.239194 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e712f396-53c6-4f25-85e9-c358a1855644" containerName="pruner" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.239208 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="06090bf7-e79d-450e-98e0-40acc98884c5" containerName="pruner" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.239911 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.246262 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.246963 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.252074 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.415646 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f232a-1230-455c-bbb7-2050689742d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.415799 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3f232a-1230-455c-bbb7-2050689742d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.517775 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f232a-1230-455c-bbb7-2050689742d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.517877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f232a-1230-455c-bbb7-2050689742d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.517918 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3f232a-1230-455c-bbb7-2050689742d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" 
Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.544829 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3f232a-1230-455c-bbb7-2050689742d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:09:49 crc kubenswrapper[4771]: I0129 09:09:49.573222 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:09:50 crc kubenswrapper[4771]: I0129 09:09:50.396569 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:09:50 crc kubenswrapper[4771]: I0129 09:09:50.398604 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:09:53 crc kubenswrapper[4771]: E0129 09:09:53.190600 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rxfcv" podUID="bf879871-35a5-4d77-b71c-672c2f524993" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.436112 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.437050 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.449007 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.584287 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-var-lock\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.584378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ad033e-b754-451b-a261-bdc556aaeaf5-kube-api-access\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.584420 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.685885 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ad033e-b754-451b-a261-bdc556aaeaf5-kube-api-access\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.685968 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.686026 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-var-lock\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.686118 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-var-lock\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.686165 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.708903 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ad033e-b754-451b-a261-bdc556aaeaf5-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:53 crc kubenswrapper[4771]: I0129 09:09:53.760962 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:09:54 crc kubenswrapper[4771]: E0129 09:09:54.103674 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 09:09:54 crc kubenswrapper[4771]: E0129 09:09:54.103914 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tqtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hwlvt_openshift-marketplace(8baa3171-ca7e-40a7-bd19-dfae944704fa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:09:54 crc kubenswrapper[4771]: E0129 09:09:54.105129 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hwlvt" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" Jan 29 09:10:00 crc kubenswrapper[4771]: I0129 09:10:00.396191 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:10:00 crc kubenswrapper[4771]: I0129 09:10:00.396733 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial 
tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:10:00 crc kubenswrapper[4771]: E0129 09:10:00.864232 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hwlvt" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" Jan 29 09:10:00 crc kubenswrapper[4771]: E0129 09:10:00.989334 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 09:10:00 crc kubenswrapper[4771]: E0129 09:10:00.989650 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ztlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-478nk_openshift-marketplace(5d6ca13b-901b-4de9-bf79-494866c7ebdd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:10:00 crc kubenswrapper[4771]: E0129 09:10:00.990950 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-478nk" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" Jan 29 09:10:03 crc kubenswrapper[4771]: E0129 09:10:03.456058 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 09:10:03 crc kubenswrapper[4771]: E0129 09:10:03.456664 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phwm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vphmv_openshift-marketplace(d0799f09-12e7-42a2-90c6-c1fb70e5c67f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:10:03 crc kubenswrapper[4771]: E0129 09:10:03.457931 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vphmv" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" Jan 29 09:10:04 crc kubenswrapper[4771]: E0129 09:10:04.872151 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vphmv" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" Jan 29 09:10:04 crc kubenswrapper[4771]: E0129 09:10:04.872815 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-478nk" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.082258 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.083087 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5vgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j7lqm_openshift-marketplace(d60426c7-faf6-4300-9ed0-160a76d81782): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.086197 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j7lqm" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.194450 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.194592 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpngr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sffpf_openshift-marketplace(e6838739-1976-46a6-891d-e2a7ee919777): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.196114 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sffpf" podUID="e6838739-1976-46a6-891d-e2a7ee919777" Jan 29 09:10:05 crc kubenswrapper[4771]: I0129 09:10:05.334288 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 09:10:05 crc kubenswrapper[4771]: I0129 09:10:05.388107 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ad033e-b754-451b-a261-bdc556aaeaf5","Type":"ContainerStarted","Data":"aa758536a20518b9cfb399dffb331ea2ac24404c513fd20c2c72523a50b865a2"} Jan 29 09:10:05 crc kubenswrapper[4771]: I0129 09:10:05.392853 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxgbp" event={"ID":"a40a655e-56fc-4578-8dd9-6ae371433ea0","Type":"ContainerStarted","Data":"93b1c5ad14f13d2edea8c48013a419eac63b5a450d4a07654d386c50ee243d82"} Jan 29 09:10:05 crc kubenswrapper[4771]: I0129 09:10:05.392918 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.393149 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sffpf" podUID="e6838739-1976-46a6-891d-e2a7ee919777" Jan 29 09:10:05 crc kubenswrapper[4771]: I0129 09:10:05.393302 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:10:05 crc kubenswrapper[4771]: I0129 09:10:05.393388 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:10:05 crc kubenswrapper[4771]: E0129 09:10:05.394394 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j7lqm" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" Jan 29 09:10:05 crc kubenswrapper[4771]: I0129 09:10:05.409631 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 09:10:06 crc kubenswrapper[4771]: I0129 09:10:06.399387 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ab3f232a-1230-455c-bbb7-2050689742d1","Type":"ContainerStarted","Data":"9b9e28d15e0b4eab58320d620a4319039b8aea605f8539dc2c921c79cc9a431e"} Jan 29 09:10:06 crc kubenswrapper[4771]: I0129 09:10:06.400043 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:10:06 crc kubenswrapper[4771]: I0129 09:10:06.400106 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:10:06 crc kubenswrapper[4771]: E0129 09:10:06.589210 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 09:10:06 crc kubenswrapper[4771]: E0129 09:10:06.589423 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-482lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4qs84_openshift-marketplace(07d5db3c-f1d2-4b77-bcf7-07ef89073f9b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:10:06 crc kubenswrapper[4771]: E0129 09:10:06.590672 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4qs84" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" Jan 29 09:10:06 crc kubenswrapper[4771]: E0129 09:10:06.663255 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 09:10:06 crc kubenswrapper[4771]: E0129 09:10:06.663713 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lf884,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b5q79_openshift-marketplace(fc279bbe-fdb9-4371-afaf-e2573ea04ce2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 09:10:06 crc kubenswrapper[4771]: E0129 09:10:06.665316 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b5q79" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" Jan 29 09:10:07 crc kubenswrapper[4771]: I0129 09:10:07.409004 4771 generic.go:334] "Generic (PLEG): container finished" podID="ab3f232a-1230-455c-bbb7-2050689742d1" containerID="0f460e944f8cae71b688b2d2ee59ef0a9cbefb2d1ee12a29d10b0e8a27c93a22" exitCode=0 Jan 29 09:10:07 crc kubenswrapper[4771]: I0129 09:10:07.409089 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ab3f232a-1230-455c-bbb7-2050689742d1","Type":"ContainerDied","Data":"0f460e944f8cae71b688b2d2ee59ef0a9cbefb2d1ee12a29d10b0e8a27c93a22"} Jan 29 09:10:07 crc kubenswrapper[4771]: I0129 09:10:07.414109 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ad033e-b754-451b-a261-bdc556aaeaf5","Type":"ContainerStarted","Data":"5715e1326d5bb284f5633e27e45739cad5adf399f5cb61274771d96356575da4"} Jan 29 09:10:07 crc kubenswrapper[4771]: E0129 09:10:07.414755 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b5q79" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" Jan 29 09:10:07 crc kubenswrapper[4771]: E0129 09:10:07.415329 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-4qs84" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" Jan 29 09:10:07 crc kubenswrapper[4771]: I0129 09:10:07.498141 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=14.498121784 podStartE2EDuration="14.498121784s" podCreationTimestamp="2026-01-29 09:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:10:07.494606503 +0000 UTC m=+227.617446740" watchObservedRunningTime="2026-01-29 09:10:07.498121784 +0000 UTC m=+227.620962011" Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.420323 4771 generic.go:334] "Generic (PLEG): container finished" podID="bf879871-35a5-4d77-b71c-672c2f524993" containerID="518e7542100d2c99babcd1482df0d3b25ba44a7af83d11194f82c993f2eebd77" exitCode=0 Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.420407 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxfcv" event={"ID":"bf879871-35a5-4d77-b71c-672c2f524993","Type":"ContainerDied","Data":"518e7542100d2c99babcd1482df0d3b25ba44a7af83d11194f82c993f2eebd77"} Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.717546 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.741434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3f232a-1230-455c-bbb7-2050689742d1-kube-api-access\") pod \"ab3f232a-1230-455c-bbb7-2050689742d1\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.741515 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f232a-1230-455c-bbb7-2050689742d1-kubelet-dir\") pod \"ab3f232a-1230-455c-bbb7-2050689742d1\" (UID: \"ab3f232a-1230-455c-bbb7-2050689742d1\") " Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.741741 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3f232a-1230-455c-bbb7-2050689742d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ab3f232a-1230-455c-bbb7-2050689742d1" (UID: "ab3f232a-1230-455c-bbb7-2050689742d1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.741996 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f232a-1230-455c-bbb7-2050689742d1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.749177 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3f232a-1230-455c-bbb7-2050689742d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ab3f232a-1230-455c-bbb7-2050689742d1" (UID: "ab3f232a-1230-455c-bbb7-2050689742d1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:10:08 crc kubenswrapper[4771]: I0129 09:10:08.843243 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3f232a-1230-455c-bbb7-2050689742d1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:09 crc kubenswrapper[4771]: I0129 09:10:09.429907 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxfcv" event={"ID":"bf879871-35a5-4d77-b71c-672c2f524993","Type":"ContainerStarted","Data":"c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783"} Jan 29 09:10:09 crc kubenswrapper[4771]: I0129 09:10:09.432158 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ab3f232a-1230-455c-bbb7-2050689742d1","Type":"ContainerDied","Data":"9b9e28d15e0b4eab58320d620a4319039b8aea605f8539dc2c921c79cc9a431e"} Jan 29 09:10:09 crc kubenswrapper[4771]: I0129 09:10:09.432202 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b9e28d15e0b4eab58320d620a4319039b8aea605f8539dc2c921c79cc9a431e" Jan 29 09:10:09 crc kubenswrapper[4771]: I0129 09:10:09.432206 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 09:10:09 crc kubenswrapper[4771]: I0129 09:10:09.454573 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rxfcv" podStartSLOduration=5.094317411 podStartE2EDuration="1m1.454519757s" podCreationTimestamp="2026-01-29 09:09:08 +0000 UTC" firstStartedPulling="2026-01-29 09:09:12.564097189 +0000 UTC m=+172.686937416" lastFinishedPulling="2026-01-29 09:10:08.924299515 +0000 UTC m=+229.047139762" observedRunningTime="2026-01-29 09:10:09.450534832 +0000 UTC m=+229.573375079" watchObservedRunningTime="2026-01-29 09:10:09.454519757 +0000 UTC m=+229.577359974" Jan 29 09:10:09 crc kubenswrapper[4771]: I0129 09:10:09.759657 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rxfcv" Jan 29 09:10:09 crc kubenswrapper[4771]: I0129 09:10:09.759765 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rxfcv" Jan 29 09:10:10 crc kubenswrapper[4771]: I0129 09:10:10.395487 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:10:10 crc kubenswrapper[4771]: I0129 09:10:10.395550 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:10:10 crc kubenswrapper[4771]: I0129 09:10:10.396281 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxgbp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 09:10:10 crc kubenswrapper[4771]: I0129 09:10:10.396430 4771 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-kxgbp" podUID="a40a655e-56fc-4578-8dd9-6ae371433ea0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 09:10:10 crc kubenswrapper[4771]: I0129 09:10:10.915300 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rxfcv" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="registry-server" probeResult="failure" output=< Jan 29 09:10:10 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:10:10 crc kubenswrapper[4771]: > Jan 29 09:10:14 crc kubenswrapper[4771]: I0129 09:10:14.271286 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:10:14 crc kubenswrapper[4771]: I0129 09:10:14.272012 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:10:14 crc kubenswrapper[4771]: I0129 09:10:14.272050 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:10:14 crc kubenswrapper[4771]: I0129 09:10:14.272721 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:10:14 crc kubenswrapper[4771]: I0129 09:10:14.272788 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74" gracePeriod=600 Jan 29 09:10:14 crc kubenswrapper[4771]: I0129 09:10:14.463270 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74" exitCode=0 Jan 29 09:10:14 crc kubenswrapper[4771]: I0129 09:10:14.463314 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74"} Jan 29 09:10:15 crc kubenswrapper[4771]: I0129 09:10:15.471223 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"7ca9b32e9e3f67b0d55af7b6b208d87b9e276a6c04ca2e9eb6a9b10aa7344d4f"} Jan 29 09:10:16 crc kubenswrapper[4771]: I0129 09:10:16.478977 4771 generic.go:334] "Generic (PLEG): container finished" podID="8baa3171-ca7e-40a7-bd19-dfae944704fa" 
containerID="584b58afaed6973119c456c25f4c31aefcd15bf7238e21c087d398b624f5c8ae" exitCode=0 Jan 29 09:10:16 crc kubenswrapper[4771]: I0129 09:10:16.479067 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwlvt" event={"ID":"8baa3171-ca7e-40a7-bd19-dfae944704fa","Type":"ContainerDied","Data":"584b58afaed6973119c456c25f4c31aefcd15bf7238e21c087d398b624f5c8ae"} Jan 29 09:10:17 crc kubenswrapper[4771]: I0129 09:10:17.496648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwlvt" event={"ID":"8baa3171-ca7e-40a7-bd19-dfae944704fa","Type":"ContainerStarted","Data":"743f49f20cab1ff2ffd97a6d95b9f4f5b89548fc45afb2c6e320244738d35ec2"} Jan 29 09:10:17 crc kubenswrapper[4771]: I0129 09:10:17.513948 4771 generic.go:334] "Generic (PLEG): container finished" podID="e6838739-1976-46a6-891d-e2a7ee919777" containerID="844efa433b50671edd80cb7060225fa105bf8a07c3dd620d8578f637342370b1" exitCode=0 Jan 29 09:10:17 crc kubenswrapper[4771]: I0129 09:10:17.514021 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sffpf" event={"ID":"e6838739-1976-46a6-891d-e2a7ee919777","Type":"ContainerDied","Data":"844efa433b50671edd80cb7060225fa105bf8a07c3dd620d8578f637342370b1"} Jan 29 09:10:17 crc kubenswrapper[4771]: I0129 09:10:17.526555 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hwlvt" podStartSLOduration=3.996170728 podStartE2EDuration="1m9.526535052s" podCreationTimestamp="2026-01-29 09:09:08 +0000 UTC" firstStartedPulling="2026-01-29 09:09:11.411314876 +0000 UTC m=+171.534155103" lastFinishedPulling="2026-01-29 09:10:16.9416792 +0000 UTC m=+237.064519427" observedRunningTime="2026-01-29 09:10:17.523107754 +0000 UTC m=+237.645947991" watchObservedRunningTime="2026-01-29 09:10:17.526535052 +0000 UTC m=+237.649375289" Jan 29 09:10:18 crc kubenswrapper[4771]: I0129 09:10:18.519357 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-478nk" event={"ID":"5d6ca13b-901b-4de9-bf79-494866c7ebdd","Type":"ContainerStarted","Data":"752b2577b67abf2391d1da2d125166dbb137d7ed002ef3c0831a607f7d543512"} Jan 29 09:10:18 crc kubenswrapper[4771]: I0129 09:10:18.522450 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sffpf" event={"ID":"e6838739-1976-46a6-891d-e2a7ee919777","Type":"ContainerStarted","Data":"3803be0b23694a01f9c882593f656e6f0b8919bf5d8df1577cb28f9c7f1a413d"} Jan 29 09:10:18 crc kubenswrapper[4771]: I0129 09:10:18.528519 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hwlvt" Jan 29 09:10:18 crc kubenswrapper[4771]: I0129 09:10:18.528769 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hwlvt" Jan 29 09:10:18 crc kubenswrapper[4771]: I0129 09:10:18.566990 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sffpf" podStartSLOduration=3.827931183 podStartE2EDuration="1m10.566965592s" podCreationTimestamp="2026-01-29 09:09:08 +0000 UTC" firstStartedPulling="2026-01-29 09:09:11.416867094 +0000 UTC m=+171.539707331" lastFinishedPulling="2026-01-29 09:10:18.155901513 +0000 UTC m=+238.278741740" observedRunningTime="2026-01-29 09:10:18.562071942 +0000 UTC m=+238.684912189" watchObservedRunningTime="2026-01-29 
09:10:18.566965592 +0000 UTC m=+238.689805839" Jan 29 09:10:19 crc kubenswrapper[4771]: I0129 09:10:19.011596 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sffpf" Jan 29 09:10:19 crc kubenswrapper[4771]: I0129 09:10:19.011655 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sffpf" Jan 29 09:10:19 crc kubenswrapper[4771]: I0129 09:10:19.601787 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hwlvt" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="registry-server" probeResult="failure" output=< Jan 29 09:10:19 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:10:19 crc kubenswrapper[4771]: > Jan 29 09:10:19 crc kubenswrapper[4771]: I0129 09:10:19.989670 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rxfcv" Jan 29 09:10:20 crc kubenswrapper[4771]: I0129 09:10:20.034856 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rxfcv" Jan 29 09:10:20 crc kubenswrapper[4771]: I0129 09:10:20.069349 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sffpf" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="registry-server" probeResult="failure" output=< Jan 29 09:10:20 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:10:20 crc kubenswrapper[4771]: > Jan 29 09:10:20 crc kubenswrapper[4771]: I0129 09:10:20.405339 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-kxgbp" Jan 29 09:10:20 crc kubenswrapper[4771]: I0129 09:10:20.534966 4771 generic.go:334] "Generic (PLEG): container finished" podID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerID="752b2577b67abf2391d1da2d125166dbb137d7ed002ef3c0831a607f7d543512" exitCode=0 Jan 29 09:10:20 crc kubenswrapper[4771]: I0129 09:10:20.535022 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-478nk" event={"ID":"5d6ca13b-901b-4de9-bf79-494866c7ebdd","Type":"ContainerDied","Data":"752b2577b67abf2391d1da2d125166dbb137d7ed002ef3c0831a607f7d543512"} Jan 29 09:10:23 crc kubenswrapper[4771]: I0129 09:10:23.842595 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxfcv"] Jan 29 09:10:23 crc kubenswrapper[4771]: I0129 09:10:23.843341 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rxfcv" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="registry-server" containerID="cri-o://c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783" gracePeriod=2 Jan 29 09:10:25 crc kubenswrapper[4771]: I0129 09:10:25.563191 4771 generic.go:334] "Generic (PLEG): container finished" podID="bf879871-35a5-4d77-b71c-672c2f524993" containerID="c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783" exitCode=0 Jan 29 09:10:25 crc kubenswrapper[4771]: I0129 09:10:25.563275 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxfcv" event={"ID":"bf879871-35a5-4d77-b71c-672c2f524993","Type":"ContainerDied","Data":"c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783"} Jan 29 09:10:28 crc 
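When certified-operators-rxfcv is deleted above, the kubelet kills registry-server with gracePeriod=2 (earlier, machine-config-daemon got gracePeriod=600 after its failed liveness probe). The semantics are a polite TERM first, then KILL once the grace period expires. A generic sketch of that sequence in Go — ordinary process handling, not the kubelet/CRI-O implementation:

package main

import (
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Polite stop: the start of the grace period.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case <-done:
		// Exited within the grace period (the exitCode=0 case seen above).
	case <-time.After(2 * time.Second): // gracePeriod=2, as for registry-server
		_ = cmd.Process.Kill() // SIGKILL once the grace period expires
		<-done
	}
}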
kubenswrapper[4771]: I0129 09:10:28.577488 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hwlvt" Jan 29 09:10:28 crc kubenswrapper[4771]: I0129 09:10:28.619347 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hwlvt" Jan 29 09:10:29 crc kubenswrapper[4771]: I0129 09:10:29.060118 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sffpf" Jan 29 09:10:29 crc kubenswrapper[4771]: I0129 09:10:29.199015 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sffpf" Jan 29 09:10:29 crc kubenswrapper[4771]: E0129 09:10:29.761205 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783 is running failed: container process not found" containerID="c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:10:29 crc kubenswrapper[4771]: E0129 09:10:29.762091 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783 is running failed: container process not found" containerID="c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:10:29 crc kubenswrapper[4771]: E0129 09:10:29.762651 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783 is running failed: container process not found" containerID="c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:10:29 crc kubenswrapper[4771]: E0129 09:10:29.762783 4771 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rxfcv" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="registry-server" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.592785 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxfcv" event={"ID":"bf879871-35a5-4d77-b71c-672c2f524993","Type":"ContainerDied","Data":"97169004d7f3027a65f7424f4453d9295e78352e5bb76af9e56429f004cf5269"} Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.592887 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97169004d7f3027a65f7424f4453d9295e78352e5bb76af9e56429f004cf5269" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.623474 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxfcv" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.678220 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-utilities\") pod \"bf879871-35a5-4d77-b71c-672c2f524993\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.679485 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-utilities" (OuterVolumeSpecName: "utilities") pod "bf879871-35a5-4d77-b71c-672c2f524993" (UID: "bf879871-35a5-4d77-b71c-672c2f524993"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.679650 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-catalog-content\") pod \"bf879871-35a5-4d77-b71c-672c2f524993\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.686202 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf879871-35a5-4d77-b71c-672c2f524993-kube-api-access-6vkf8" (OuterVolumeSpecName: "kube-api-access-6vkf8") pod "bf879871-35a5-4d77-b71c-672c2f524993" (UID: "bf879871-35a5-4d77-b71c-672c2f524993"). InnerVolumeSpecName "kube-api-access-6vkf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.679681 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vkf8\" (UniqueName: \"kubernetes.io/projected/bf879871-35a5-4d77-b71c-672c2f524993-kube-api-access-6vkf8\") pod \"bf879871-35a5-4d77-b71c-672c2f524993\" (UID: \"bf879871-35a5-4d77-b71c-672c2f524993\") " Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.689389 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vkf8\" (UniqueName: \"kubernetes.io/projected/bf879871-35a5-4d77-b71c-672c2f524993-kube-api-access-6vkf8\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.689418 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.736600 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf879871-35a5-4d77-b71c-672c2f524993" (UID: "bf879871-35a5-4d77-b71c-672c2f524993"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:10:30 crc kubenswrapper[4771]: I0129 09:10:30.790502 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf879871-35a5-4d77-b71c-672c2f524993-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:31 crc kubenswrapper[4771]: I0129 09:10:31.597665 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxfcv" Jan 29 09:10:31 crc kubenswrapper[4771]: I0129 09:10:31.619600 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxfcv"] Jan 29 09:10:31 crc kubenswrapper[4771]: I0129 09:10:31.622929 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rxfcv"] Jan 29 09:10:32 crc kubenswrapper[4771]: I0129 09:10:32.843983 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf879871-35a5-4d77-b71c-672c2f524993" path="/var/lib/kubelet/pods/bf879871-35a5-4d77-b71c-672c2f524993/volumes" Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.623906 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7lqm" event={"ID":"d60426c7-faf6-4300-9ed0-160a76d81782","Type":"ContainerStarted","Data":"0cf04b054a7ca5ee75cdaa7278263de34c00ec7ec61929e1ce2240b691cad1d3"} Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.626873 4771 generic.go:334] "Generic (PLEG): container finished" podID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerID="ad9d0e43cdf87bfffcf9cf91d9085c55b399713f712e07f809f33f52c0b9524b" exitCode=0 Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.626919 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5q79" event={"ID":"fc279bbe-fdb9-4371-afaf-e2573ea04ce2","Type":"ContainerDied","Data":"ad9d0e43cdf87bfffcf9cf91d9085c55b399713f712e07f809f33f52c0b9524b"} Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.630476 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vphmv" event={"ID":"d0799f09-12e7-42a2-90c6-c1fb70e5c67f","Type":"ContainerStarted","Data":"638eb06582b2313f0e3996a76eae2a3875ff5451dab8dbb4e2aaa6ec218f9a90"} Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.632546 4771 generic.go:334] "Generic (PLEG): container finished" podID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerID="5a24837ef2995d85ad5b43aa7da231c912ee375a2a23bdb2186d146f4a018bf4" exitCode=0 Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.632611 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qs84" event={"ID":"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b","Type":"ContainerDied","Data":"5a24837ef2995d85ad5b43aa7da231c912ee375a2a23bdb2186d146f4a018bf4"} Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.638944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-478nk" event={"ID":"5d6ca13b-901b-4de9-bf79-494866c7ebdd","Type":"ContainerStarted","Data":"f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245"} Jan 29 09:10:36 crc kubenswrapper[4771]: I0129 09:10:36.702973 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-478nk" podStartSLOduration=4.028663717 podStartE2EDuration="1m25.702952537s" podCreationTimestamp="2026-01-29 09:09:11 +0000 UTC" firstStartedPulling="2026-01-29 09:09:13.82091015 +0000 UTC m=+173.943750377" lastFinishedPulling="2026-01-29 09:10:35.49519898 +0000 UTC m=+255.618039197" observedRunningTime="2026-01-29 09:10:36.700106966 +0000 UTC m=+256.822947203" watchObservedRunningTime="2026-01-29 09:10:36.702952537 +0000 UTC m=+256.825792754" Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.648395 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4qs84" event={"ID":"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b","Type":"ContainerStarted","Data":"cb1186317a8eed31b42343a2a2ba52870993be1470d9a47b5dda8f57dbed1bc6"} Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.650988 4771 generic.go:334] "Generic (PLEG): container finished" podID="d60426c7-faf6-4300-9ed0-160a76d81782" containerID="0cf04b054a7ca5ee75cdaa7278263de34c00ec7ec61929e1ce2240b691cad1d3" exitCode=0 Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.651057 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7lqm" event={"ID":"d60426c7-faf6-4300-9ed0-160a76d81782","Type":"ContainerDied","Data":"0cf04b054a7ca5ee75cdaa7278263de34c00ec7ec61929e1ce2240b691cad1d3"} Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.656615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5q79" event={"ID":"fc279bbe-fdb9-4371-afaf-e2573ea04ce2","Type":"ContainerStarted","Data":"f809be2b03928edc8459163d7c774c445b990f733b068771149c872d0f0db5e3"} Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.660157 4771 generic.go:334] "Generic (PLEG): container finished" podID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerID="638eb06582b2313f0e3996a76eae2a3875ff5451dab8dbb4e2aaa6ec218f9a90" exitCode=0 Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.660226 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vphmv" event={"ID":"d0799f09-12e7-42a2-90c6-c1fb70e5c67f","Type":"ContainerDied","Data":"638eb06582b2313f0e3996a76eae2a3875ff5451dab8dbb4e2aaa6ec218f9a90"} Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.677813 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4qs84" podStartSLOduration=3.03699731 podStartE2EDuration="1m27.677791962s" podCreationTimestamp="2026-01-29 09:09:10 +0000 UTC" firstStartedPulling="2026-01-29 09:09:12.577782857 +0000 UTC m=+172.700623084" lastFinishedPulling="2026-01-29 09:10:37.218577509 +0000 UTC m=+257.341417736" observedRunningTime="2026-01-29 09:10:37.67564913 +0000 UTC m=+257.798489357" watchObservedRunningTime="2026-01-29 09:10:37.677791962 +0000 UTC m=+257.800632189" Jan 29 09:10:37 crc kubenswrapper[4771]: I0129 09:10:37.706438 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b5q79" podStartSLOduration=3.239559089 podStartE2EDuration="1m27.706417154s" podCreationTimestamp="2026-01-29 09:09:10 +0000 UTC" firstStartedPulling="2026-01-29 09:09:12.652111643 +0000 UTC m=+172.774951880" lastFinishedPulling="2026-01-29 09:10:37.118969718 +0000 UTC m=+257.241809945" observedRunningTime="2026-01-29 09:10:37.701612606 +0000 UTC m=+257.824452853" watchObservedRunningTime="2026-01-29 09:10:37.706417154 +0000 UTC m=+257.829257381" Jan 29 09:10:38 crc kubenswrapper[4771]: I0129 09:10:38.670534 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vphmv" event={"ID":"d0799f09-12e7-42a2-90c6-c1fb70e5c67f","Type":"ContainerStarted","Data":"0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283"} Jan 29 09:10:38 crc kubenswrapper[4771]: I0129 09:10:38.698638 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vphmv" podStartSLOduration=3.06164174 podStartE2EDuration="1m27.698615159s" podCreationTimestamp="2026-01-29 
09:09:11 +0000 UTC" firstStartedPulling="2026-01-29 09:09:13.810639729 +0000 UTC m=+173.933479956" lastFinishedPulling="2026-01-29 09:10:38.447613148 +0000 UTC m=+258.570453375" observedRunningTime="2026-01-29 09:10:38.696376554 +0000 UTC m=+258.819216781" watchObservedRunningTime="2026-01-29 09:10:38.698615159 +0000 UTC m=+258.821455386" Jan 29 09:10:39 crc kubenswrapper[4771]: I0129 09:10:39.339613 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nn2z2"] Jan 29 09:10:39 crc kubenswrapper[4771]: I0129 09:10:39.678471 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7lqm" event={"ID":"d60426c7-faf6-4300-9ed0-160a76d81782","Type":"ContainerStarted","Data":"36dcb4d47bc3146cd41860f4909a0eebc7c531a90da4f133d90bd338936e7957"} Jan 29 09:10:39 crc kubenswrapper[4771]: I0129 09:10:39.698090 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j7lqm" podStartSLOduration=4.465714493 podStartE2EDuration="1m31.698068982s" podCreationTimestamp="2026-01-29 09:09:08 +0000 UTC" firstStartedPulling="2026-01-29 09:09:11.423741438 +0000 UTC m=+171.546581665" lastFinishedPulling="2026-01-29 09:10:38.656095927 +0000 UTC m=+258.778936154" observedRunningTime="2026-01-29 09:10:39.697480575 +0000 UTC m=+259.820320802" watchObservedRunningTime="2026-01-29 09:10:39.698068982 +0000 UTC m=+259.820909209" Jan 29 09:10:40 crc kubenswrapper[4771]: I0129 09:10:40.616883 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b5q79" Jan 29 09:10:40 crc kubenswrapper[4771]: I0129 09:10:40.617340 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b5q79" Jan 29 09:10:40 crc kubenswrapper[4771]: I0129 09:10:40.670666 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b5q79" Jan 29 09:10:41 crc kubenswrapper[4771]: I0129 09:10:41.019566 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:10:41 crc kubenswrapper[4771]: I0129 09:10:41.019645 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:10:41 crc kubenswrapper[4771]: I0129 09:10:41.063216 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:10:41 crc kubenswrapper[4771]: I0129 09:10:41.994618 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:10:41 crc kubenswrapper[4771]: I0129 09:10:41.996409 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:10:42 crc kubenswrapper[4771]: I0129 09:10:42.159865 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:10:42 crc kubenswrapper[4771]: I0129 09:10:42.159955 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:10:42 crc kubenswrapper[4771]: I0129 09:10:42.750467 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 
29 09:10:43 crc kubenswrapper[4771]: I0129 09:10:43.036369 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vphmv" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="registry-server" probeResult="failure" output=< Jan 29 09:10:43 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:10:43 crc kubenswrapper[4771]: > Jan 29 09:10:43 crc kubenswrapper[4771]: I0129 09:10:43.207383 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-478nk" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="registry-server" probeResult="failure" output=< Jan 29 09:10:43 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:10:43 crc kubenswrapper[4771]: > Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.212219 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.212848 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="extract-utilities" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.212865 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="extract-utilities" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.212879 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3f232a-1230-455c-bbb7-2050689742d1" containerName="pruner" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.212886 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3f232a-1230-455c-bbb7-2050689742d1" containerName="pruner" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.212901 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="extract-content" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.212909 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="extract-content" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.212936 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="registry-server" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.212943 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="registry-server" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.213087 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf879871-35a5-4d77-b71c-672c2f524993" containerName="registry-server" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.213100 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3f232a-1230-455c-bbb7-2050689742d1" containerName="pruner" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.213606 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.215588 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.215678 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.216041 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216066 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.216079 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216089 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.216107 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216115 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.216130 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216139 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.216150 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216159 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.216167 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216175 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216306 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216332 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216344 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 09:10:44 crc 
kubenswrapper[4771]: I0129 09:10:44.216354 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216371 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.216505 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216515 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216628 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.216964 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9" gracePeriod=15 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.217048 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2" gracePeriod=15 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.217051 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65" gracePeriod=15 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.217114 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f" gracePeriod=15 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.217117 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa" gracePeriod=15 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.254769 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.282813 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 
09:10:44.282870 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.282943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.282998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.283031 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.283053 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.283193 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.283290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.384976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc 
kubenswrapper[4771]: I0129 09:10:44.385257 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385397 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385304 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385438 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385536 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385795 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385843 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.385870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.386010 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.386048 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.386078 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.386109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.386017 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.552494 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:10:44 crc kubenswrapper[4771]: W0129 09:10:44.579234 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d07ab6ef2dc8d877b9b6c037c87b1f258ff8008f38c0ad64fdf17484cdbecd1e WatchSource:0}: Error finding container d07ab6ef2dc8d877b9b6c037c87b1f258ff8008f38c0ad64fdf17484cdbecd1e: Status 404 returned error can't find the container with id d07ab6ef2dc8d877b9b6c037c87b1f258ff8008f38c0ad64fdf17484cdbecd1e Jan 29 09:10:44 crc kubenswrapper[4771]: E0129 09:10:44.585304 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f289f9d37c07e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 09:10:44.582482046 +0000 UTC m=+264.705322273,LastTimestamp:2026-01-29 09:10:44.582482046 +0000 UTC m=+264.705322273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.708502 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d07ab6ef2dc8d877b9b6c037c87b1f258ff8008f38c0ad64fdf17484cdbecd1e"} Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.716642 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.718239 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.719447 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f" exitCode=0 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.719480 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65" exitCode=0 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.719491 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2" exitCode=0 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.719502 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa" exitCode=2 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.719577 4771 scope.go:117] "RemoveContainer" containerID="a5d80c59f1e4154abad11ad1e27e1487d08803ecce16d6412c43bb863f3431d2" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.722194 4771 generic.go:334] "Generic (PLEG): container finished" podID="e2ad033e-b754-451b-a261-bdc556aaeaf5" containerID="5715e1326d5bb284f5633e27e45739cad5adf399f5cb61274771d96356575da4" exitCode=0 Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.722235 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ad033e-b754-451b-a261-bdc556aaeaf5","Type":"ContainerDied","Data":"5715e1326d5bb284f5633e27e45739cad5adf399f5cb61274771d96356575da4"} Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.722982 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.723356 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:44 crc kubenswrapper[4771]: I0129 09:10:44.723617 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:45 crc kubenswrapper[4771]: E0129 09:10:45.290257 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f289f9d37c07e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 09:10:44.582482046 +0000 UTC m=+264.705322273,LastTimestamp:2026-01-29 09:10:44.582482046 +0000 UTC m=+264.705322273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 09:10:45 crc kubenswrapper[4771]: I0129 09:10:45.728293 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d"} Jan 29 09:10:45 crc kubenswrapper[4771]: I0129 09:10:45.729179 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:45 crc kubenswrapper[4771]: I0129 09:10:45.729365 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:45 crc kubenswrapper[4771]: I0129 09:10:45.731989 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 09:10:45 crc kubenswrapper[4771]: I0129 09:10:45.978926 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:10:45 crc kubenswrapper[4771]: I0129 09:10:45.979928 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:45 crc kubenswrapper[4771]: I0129 09:10:45.980227 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.111953 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-kubelet-dir\") pod \"e2ad033e-b754-451b-a261-bdc556aaeaf5\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.112031 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ad033e-b754-451b-a261-bdc556aaeaf5-kube-api-access\") pod \"e2ad033e-b754-451b-a261-bdc556aaeaf5\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.112120 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-var-lock\") pod \"e2ad033e-b754-451b-a261-bdc556aaeaf5\" (UID: \"e2ad033e-b754-451b-a261-bdc556aaeaf5\") " Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.112148 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2ad033e-b754-451b-a261-bdc556aaeaf5" (UID: 
"e2ad033e-b754-451b-a261-bdc556aaeaf5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.112259 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-var-lock" (OuterVolumeSpecName: "var-lock") pod "e2ad033e-b754-451b-a261-bdc556aaeaf5" (UID: "e2ad033e-b754-451b-a261-bdc556aaeaf5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.112499 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.112521 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ad033e-b754-451b-a261-bdc556aaeaf5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.120146 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ad033e-b754-451b-a261-bdc556aaeaf5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2ad033e-b754-451b-a261-bdc556aaeaf5" (UID: "e2ad033e-b754-451b-a261-bdc556aaeaf5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.214410 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ad033e-b754-451b-a261-bdc556aaeaf5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.741495 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.742987 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9" exitCode=0 Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.744833 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ad033e-b754-451b-a261-bdc556aaeaf5","Type":"ContainerDied","Data":"aa758536a20518b9cfb399dffb331ea2ac24404c513fd20c2c72523a50b865a2"} Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.744860 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.744871 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa758536a20518b9cfb399dffb331ea2ac24404c513fd20c2c72523a50b865a2" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.757372 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:46 crc kubenswrapper[4771]: I0129 09:10:46.757964 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.248155 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.250083 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.250945 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.251830 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.252395 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331472 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331529 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331592 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331606 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331674 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331720 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331892 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331905 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.331914 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.754043 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.755157 4771 scope.go:117] "RemoveContainer" containerID="7d76054cbbec24c6c570a5de9b15ab1eea7e9db5bb73599ef763bc68fadcdf3f" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.755230 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.771444 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.771665 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.771853 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.775313 4771 scope.go:117] "RemoveContainer" containerID="9096ea2fa731f9d282642f6170e8d97f5ddf631928b5c2983238bba499ccbf65" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.791406 4771 scope.go:117] "RemoveContainer" containerID="9967b64c3b673bb21eebeab43ab6cd956ae5f39e9a1b76ed994d92347aad8fe2" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.812075 4771 scope.go:117] "RemoveContainer" containerID="f8c1f247623360f754217bad25c93ed0c07d1325251a3c63b3b8deaeeb038bfa" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.841251 4771 scope.go:117] "RemoveContainer" containerID="0c33b5d2ac50dff2525bf95047fdc2c964f1a0dc7ac24ba41c011d12c28db9a9" Jan 29 09:10:47 crc kubenswrapper[4771]: I0129 09:10:47.872168 4771 scope.go:117] "RemoveContainer" containerID="9bf8b8ae7c4ad108dfaa009c87f3cf45ae403b387e715d3710ae5a589ef4e31a" Jan 29 09:10:48 crc kubenswrapper[4771]: I0129 09:10:48.846619 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 09:10:48 crc kubenswrapper[4771]: E0129 09:10:48.915477 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:48 crc kubenswrapper[4771]: E0129 09:10:48.915712 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:48 crc kubenswrapper[4771]: E0129 09:10:48.915878 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:48 crc kubenswrapper[4771]: E0129 09:10:48.916066 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: 
connection refused" Jan 29 09:10:48 crc kubenswrapper[4771]: E0129 09:10:48.916235 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:48 crc kubenswrapper[4771]: I0129 09:10:48.916262 4771 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 09:10:48 crc kubenswrapper[4771]: E0129 09:10:48.916413 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="200ms" Jan 29 09:10:49 crc kubenswrapper[4771]: E0129 09:10:49.117773 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="400ms" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.465777 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j7lqm" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.465969 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j7lqm" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.515734 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j7lqm" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.516450 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.516964 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.517347 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:49 crc kubenswrapper[4771]: E0129 09:10:49.518651 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="800ms" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.815790 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j7lqm" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.816309 4771 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.816565 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:49 crc kubenswrapper[4771]: I0129 09:10:49.816872 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: E0129 09:10:50.320107 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="1.6s" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.665729 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b5q79" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.666906 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.667432 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.667889 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.668223 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.841291 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.841786 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.842197 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:50 crc kubenswrapper[4771]: I0129 09:10:50.842602 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:51 crc kubenswrapper[4771]: E0129 09:10:51.921194 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="3.2s" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.031116 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.032052 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.032514 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.033062 4771 status_manager.go:851] "Failed to get status for pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.033476 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.033775 4771 status_manager.go:851] "Failed 
to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.068883 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.069375 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.070352 4771 status_manager.go:851] "Failed to get status for pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.071015 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.071750 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.072292 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.200295 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.201211 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.201618 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.202283 4771 status_manager.go:851] "Failed to get status for 
pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.202969 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.203252 4771 status_manager.go:851] "Failed to get status for pod" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" pod="openshift-marketplace/redhat-operators-478nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-478nk\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.203564 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.236594 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.237350 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.238017 4771 status_manager.go:851] "Failed to get status for pod" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" pod="openshift-marketplace/redhat-operators-478nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-478nk\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.238325 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.238770 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.239098 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:52 crc kubenswrapper[4771]: I0129 09:10:52.239297 4771 status_manager.go:851] "Failed to get status for pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:53 crc kubenswrapper[4771]: I0129 09:10:53.620535 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:10:53 crc kubenswrapper[4771]: I0129 09:10:53.621061 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:10:53 crc kubenswrapper[4771]: I0129 09:10:53.621104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:10:53 crc kubenswrapper[4771]: I0129 09:10:53.621164 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:10:53 crc kubenswrapper[4771]: W0129 09:10:53.621729 4771 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27185": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:10:53 crc kubenswrapper[4771]: E0129 09:10:53.621812 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27185\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:53 crc kubenswrapper[4771]: W0129 09:10:53.621826 4771 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27197": dial tcp 
38.129.56.98:6443: connect: connection refused Jan 29 09:10:53 crc kubenswrapper[4771]: E0129 09:10:53.621973 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27197\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:53 crc kubenswrapper[4771]: W0129 09:10:53.622203 4771 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27185": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:10:53 crc kubenswrapper[4771]: E0129 09:10:53.622338 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27185\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.621497 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.621532 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.621559 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.621497 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.621633 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:12:56.621612406 +0000 UTC m=+396.744452633 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.621762 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 09:12:56.621733389 +0000 UTC m=+396.744573646 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:54 crc kubenswrapper[4771]: W0129 09:10:54.622112 4771 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27185": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.622180 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27185\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.862258 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 09:10:54 crc kubenswrapper[4771]: E0129 09:10:54.869410 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.122914 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.98:6443: connect: connection refused" interval="6.4s" Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.291571 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f289f9d37c07e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 09:10:44.582482046 +0000 UTC m=+264.705322273,LastTimestamp:2026-01-29 09:10:44.582482046 +0000 UTC m=+264.705322273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 09:10:55 crc kubenswrapper[4771]: W0129 
09:10:55.508052 4771 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27185": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.508156 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27185\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:55 crc kubenswrapper[4771]: W0129 09:10:55.586950 4771 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27197": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.587042 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27197\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.622389 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.622457 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.622482 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.622552 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 09:12:57.622526995 +0000 UTC m=+397.745367222 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.622553 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.622661 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 09:12:57.622631739 +0000 UTC m=+397.745471956 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Jan 29 09:10:55 crc kubenswrapper[4771]: E0129 09:10:55.853951 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 09:10:56 crc kubenswrapper[4771]: W0129 09:10:56.332361 4771 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27185": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:10:56 crc kubenswrapper[4771]: E0129 09:10:56.332461 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27185\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:57 crc kubenswrapper[4771]: W0129 09:10:57.794882 4771 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27185": dial tcp 38.129.56.98:6443: connect: connection refused Jan 29 09:10:57 crc kubenswrapper[4771]: E0129 09:10:57.794969 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27185\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 
09:10:57.820805 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.820856 4771 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb" exitCode=1 Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.820888 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb"} Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.821336 4771 scope.go:117] "RemoveContainer" containerID="7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.821868 4771 status_manager.go:851] "Failed to get status for pod" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" pod="openshift-marketplace/redhat-operators-478nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-478nk\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.822300 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.822554 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.823520 4771 status_manager.go:851] "Failed to get status for pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.823844 4771 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.824232 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:57 crc kubenswrapper[4771]: I0129 09:10:57.824710 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.699216 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.829842 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.829918 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa792c00195c36f3d7ecef96f2d086a1446ee8524bb391442f7c7a40bd68baff"} Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.830823 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.831211 4771 status_manager.go:851] "Failed to get status for pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.831556 4771 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.831905 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.832159 4771 status_manager.go:851] "Failed to get status for pod" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" pod="openshift-marketplace/redhat-operators-478nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-478nk\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.832412 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.832682 4771 status_manager.go:851] 
"Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.838030 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.838556 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.838996 4771 status_manager.go:851] "Failed to get status for pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.841129 4771 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.841359 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.841724 4771 status_manager.go:851] "Failed to get status for pod" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" pod="openshift-marketplace/redhat-operators-478nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-478nk\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.841978 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.842235 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.852711 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a" Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.852748 4771 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:10:58 crc kubenswrapper[4771]: E0129 09:10:58.853204 4771 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:10:58 crc kubenswrapper[4771]: I0129 09:10:58.854042 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:10:58 crc kubenswrapper[4771]: W0129 09:10:58.874286 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-0c68c7b14e51797d798161707a46a3e69056ead34479f2779d9deb11873a0ce4 WatchSource:0}: Error finding container 0c68c7b14e51797d798161707a46a3e69056ead34479f2779d9deb11873a0ce4: Status 404 returned error can't find the container with id 0c68c7b14e51797d798161707a46a3e69056ead34479f2779d9deb11873a0ce4
Jan 29 09:10:59 crc kubenswrapper[4771]: W0129 09:10:59.672679 4771 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27197": dial tcp 38.129.56.98:6443: connect: connection refused
Jan 29 09:10:59 crc kubenswrapper[4771]: E0129 09:10:59.673170 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27197\": dial tcp 38.129.56.98:6443: connect: connection refused" logger="UnhandledError"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.842653 4771 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b5d628d8be58bc86dd0e235209f22f01573e69f38e2522955e779f164c55d6cd" exitCode=0
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.842763 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b5d628d8be58bc86dd0e235209f22f01573e69f38e2522955e779f164c55d6cd"}
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.842817 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c68c7b14e51797d798161707a46a3e69056ead34479f2779d9deb11873a0ce4"}
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.843111 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.843125 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:10:59 crc kubenswrapper[4771]: E0129 09:10:59.843969 4771 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.844015 4771 status_manager.go:851] "Failed to get status for pod" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" pod="openshift-marketplace/redhat-operators-vphmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vphmv\": dial tcp 38.129.56.98:6443: connect: connection refused"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.844522 4771 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.98:6443: connect: connection refused"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.845099 4771 status_manager.go:851] "Failed to get status for pod" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" pod="openshift-marketplace/redhat-marketplace-b5q79" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-b5q79\": dial tcp 38.129.56.98:6443: connect: connection refused"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.845339 4771 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.98:6443: connect: connection refused"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.845519 4771 status_manager.go:851] "Failed to get status for pod" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" pod="openshift-marketplace/redhat-operators-478nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-478nk\": dial tcp 38.129.56.98:6443: connect: connection refused"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.845723 4771 status_manager.go:851] "Failed to get status for pod" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" pod="openshift-marketplace/community-operators-j7lqm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j7lqm\": dial tcp 38.129.56.98:6443: connect: connection refused"
Jan 29 09:10:59 crc kubenswrapper[4771]: I0129 09:10:59.846004 4771 status_manager.go:851] "Failed to get status for pod" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.98:6443: connect: connection refused"
Jan 29 09:11:00 crc kubenswrapper[4771]: I0129 09:11:00.876539 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a26ceca4e80e165cc3d1879e285996c4ad98cb5493da3570ff2e404f5b63c867"}
Jan 29 09:11:00 crc kubenswrapper[4771]: I0129 09:11:00.877280 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"096ab13471ad4df3d18820f520200d69afebcb1c206aae8bc71d947c6b3ff4aa"}
Jan 29 09:11:00 crc kubenswrapper[4771]: I0129 09:11:00.877295 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e25345c38fd17512347803edc8f14c798b46032e0915bb648271d38cff7e5cbe"}
Jan 29 09:11:00 crc kubenswrapper[4771]: I0129 09:11:00.877305 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"072fedcda137b47fd0b8f7d945eb449d4b63ed2915111a1ad4993711360c6d74"}
Jan 29 09:11:01 crc kubenswrapper[4771]: I0129 09:11:01.222109 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 09:11:01 crc kubenswrapper[4771]: I0129 09:11:01.887609 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f5aadc6e5de6e9e68c4a57c948185331a0a75b6142a70bad89f208912c1d187"}
Jan 29 09:11:01 crc kubenswrapper[4771]: I0129 09:11:01.888504 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:11:01 crc kubenswrapper[4771]: I0129 09:11:01.888664 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:01 crc kubenswrapper[4771]: I0129 09:11:01.888772 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:03 crc kubenswrapper[4771]: I0129 09:11:03.854324 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:11:03 crc kubenswrapper[4771]: I0129 09:11:03.854691 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:11:03 crc kubenswrapper[4771]: I0129 09:11:03.861390 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:11:03 crc kubenswrapper[4771]: I0129 09:11:03.929260 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.378411 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" podUID="30d901bc-be28-4ddc-b46f-05fffb35ec40" containerName="oauth-openshift" containerID="cri-o://69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a" gracePeriod=15
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.830135 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2"
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.896065 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-cliconfig\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.896120 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-session\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.896146 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-service-ca\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.896174 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-error\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.896199 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-login\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.896222 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-provider-selection\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.896241 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-router-certs\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897046 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897050 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897154 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-policies\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897181 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-serving-cert\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-idp-0-file-data\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897224 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-trusted-ca-bundle\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897248 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kblxb\" (UniqueName: \"kubernetes.io/projected/30d901bc-be28-4ddc-b46f-05fffb35ec40-kube-api-access-kblxb\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897265 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-ocp-branding-template\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897279 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-dir\") pod \"30d901bc-be28-4ddc-b46f-05fffb35ec40\" (UID: \"30d901bc-be28-4ddc-b46f-05fffb35ec40\") "
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897505 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897517 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.897541 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.898080 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.898123 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.915121 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.915426 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d901bc-be28-4ddc-b46f-05fffb35ec40-kube-api-access-kblxb" (OuterVolumeSpecName: "kube-api-access-kblxb") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "kube-api-access-kblxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.916008 4771 generic.go:334] "Generic (PLEG): container finished" podID="30d901bc-be28-4ddc-b46f-05fffb35ec40" containerID="69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a" exitCode=0
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.916048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" event={"ID":"30d901bc-be28-4ddc-b46f-05fffb35ec40","Type":"ContainerDied","Data":"69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a"}
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.916085 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2" event={"ID":"30d901bc-be28-4ddc-b46f-05fffb35ec40","Type":"ContainerDied","Data":"070c9b8208cc654be6c8f5a072b8a20e0c26a3810879608c664fb12dfd6ef4e0"}
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.916102 4771 scope.go:117] "RemoveContainer" containerID="69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a"
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.916102 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nn2z2"
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.917110 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.918054 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.919419 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.919538 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.920053 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.920396 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.920553 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "30d901bc-be28-4ddc-b46f-05fffb35ec40" (UID: "30d901bc-be28-4ddc-b46f-05fffb35ec40"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.951762 4771 scope.go:117] "RemoveContainer" containerID="69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a"
Jan 29 09:11:04 crc kubenswrapper[4771]: E0129 09:11:04.952309 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a\": container with ID starting with 69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a not found: ID does not exist" containerID="69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a"
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.952366 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a"} err="failed to get container status \"69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a\": rpc error: code = NotFound desc = could not find container \"69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a\": container with ID starting with 69a0a6c58d681b50ffbdecd6dd49447f36b5c009e4e0f6eb1058038af4df880a not found: ID does not exist"
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999386 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999423 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999435 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999448 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999462 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999472 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999483 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999494 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999503 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999517 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kblxb\" (UniqueName: \"kubernetes.io/projected/30d901bc-be28-4ddc-b46f-05fffb35ec40-kube-api-access-kblxb\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:04 crc kubenswrapper[4771]: I0129 09:11:04.999528 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30d901bc-be28-4ddc-b46f-05fffb35ec40-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:05 crc kubenswrapper[4771]: I0129 09:11:04.999538 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d901bc-be28-4ddc-b46f-05fffb35ec40-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.587186 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.837618 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.838446 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.893276 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.907949 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.929264 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.929295 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.933314 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:11:06 crc kubenswrapper[4771]: I0129 09:11:06.936500 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="52fce257-fc1b-4ca1-99c1-0572863a21b5"
Jan 29 09:11:07 crc kubenswrapper[4771]: I0129 09:11:07.039848 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 09:11:07 crc kubenswrapper[4771]: E0129 09:11:07.042528 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
Jan 29 09:11:07 crc kubenswrapper[4771]: E0129 09:11:07.193242 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
Jan 29 09:11:07 crc kubenswrapper[4771]: E0129 09:11:07.453375 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
Jan 29 09:11:07 crc kubenswrapper[4771]: E0129 09:11:07.729149 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError"
Jan 29 09:11:07 crc kubenswrapper[4771]: I0129 09:11:07.934191 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:07 crc kubenswrapper[4771]: I0129 09:11:07.934218 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:08 crc kubenswrapper[4771]: I0129 09:11:08.698985 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 09:11:08 crc kubenswrapper[4771]: I0129 09:11:08.699229 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 09:11:08 crc kubenswrapper[4771]: I0129 09:11:08.699288 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 09:11:08 crc kubenswrapper[4771]: I0129 09:11:08.837679 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 09:11:10 crc kubenswrapper[4771]: I0129 09:11:10.877317 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="52fce257-fc1b-4ca1-99c1-0572863a21b5"
Jan 29 09:11:16 crc kubenswrapper[4771]: I0129 09:11:16.614709 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 09:11:16 crc kubenswrapper[4771]: I0129 09:11:16.653010 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 09:11:16 crc kubenswrapper[4771]: I0129 09:11:16.910312 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 09:11:17 crc kubenswrapper[4771]: I0129 09:11:17.142516 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 09:11:17 crc kubenswrapper[4771]: I0129 09:11:17.372563 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 09:11:17 crc kubenswrapper[4771]: I0129 09:11:17.597128 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 29 09:11:17 crc kubenswrapper[4771]: I0129 09:11:17.811110 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 09:11:17 crc kubenswrapper[4771]: I0129 09:11:17.864854 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.035568 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.050076 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.112462 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.119582 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.322113 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.328893 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.417437 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.523752 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.594263 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.634461 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.699813 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.699928 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.909901 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 09:11:18 crc kubenswrapper[4771]: I0129 09:11:18.923269 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.166577 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.285623 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.362281 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.444583 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.576278 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.589746 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.693445 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.698963 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.747502 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.769075 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.817136 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.881993 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.912441 4771 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.914591 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.914567318 podStartE2EDuration="35.914567318s" podCreationTimestamp="2026-01-29 09:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:11:06.753455251 +0000 UTC m=+286.876295478" watchObservedRunningTime="2026-01-29 09:11:19.914567318 +0000 UTC m=+300.037407535"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.919244 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nn2z2","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.919348 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-d787499bd-g75vd"]
Jan 29 09:11:19 crc kubenswrapper[4771]: E0129 09:11:19.919637 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d901bc-be28-4ddc-b46f-05fffb35ec40" containerName="oauth-openshift"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.919660 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d901bc-be28-4ddc-b46f-05fffb35ec40" containerName="oauth-openshift"
Jan 29 09:11:19 crc kubenswrapper[4771]: E0129 09:11:19.919675 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" containerName="installer"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.919685 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" containerName="installer"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.919967 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.919999 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f2dade0a-90bd-47af-b039-da60ecbc514a"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.920280 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ad033e-b754-451b-a261-bdc556aaeaf5" containerName="installer"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.920340 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d901bc-be28-4ddc-b46f-05fffb35ec40" containerName="oauth-openshift"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.921088 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.923767 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.927518 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.927572 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.927783 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.927963 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.937285 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.937562 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.937638 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.937575 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.937824 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.938325 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.940212 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.946945 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.961396 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.962111 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.966726 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 09:11:19 crc kubenswrapper[4771]: I0129 09:11:19.972972 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.9729518 podStartE2EDuration="13.9729518s" podCreationTimestamp="2026-01-29 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:11:19.968473858 +0000 UTC m=+300.091314085" watchObservedRunningTime="2026-01-29 09:11:19.9729518 +0000 UTC m=+300.095792027"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.007769 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016537 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016590 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016654 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016686 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-session\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016753 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016778 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e28ad013-a64e-4852-a6ad-9d1d05210fd5-audit-dir\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016800 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-error\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016860 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016893 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016929 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm787\" (UniqueName: \"kubernetes.io/projected/e28ad013-a64e-4852-a6ad-9d1d05210fd5-kube-api-access-gm787\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.016983 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-login\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.017023 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-audit-policies\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.017084 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.068049 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.075491 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.118048 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.118855 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119054 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119166 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119293 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-session\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119468 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119586 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e28ad013-a64e-4852-a6ad-9d1d05210fd5-audit-dir\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119677 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-error\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119812 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119921 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.120023 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.120121 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119214 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.120135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm787\" (UniqueName: \"kubernetes.io/projected/e28ad013-a64e-4852-a6ad-9d1d05210fd5-kube-api-access-gm787\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.119847 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e28ad013-a64e-4852-a6ad-9d1d05210fd5-audit-dir\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.120494 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-login\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.120616 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-audit-policies\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.121348 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.121369 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e28ad013-a64e-4852-a6ad-9d1d05210fd5-audit-policies\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.127294 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.129911 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.131826 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.131891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.132206 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-error\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.132867 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.135665 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-user-template-login\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.138411 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e28ad013-a64e-4852-a6ad-9d1d05210fd5-v4-0-config-system-session\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.143329 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm787\" (UniqueName: \"kubernetes.io/projected/e28ad013-a64e-4852-a6ad-9d1d05210fd5-kube-api-access-gm787\") pod \"oauth-openshift-d787499bd-g75vd\" (UID: \"e28ad013-a64e-4852-a6ad-9d1d05210fd5\") " pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.175038 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.223638 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.622227 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.699836 4771 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.701773 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.702054 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.704928 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.706334 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.710161 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.710462 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.744591 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.758136 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.782881 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.786509 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.786618 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.846030 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d901bc-be28-4ddc-b46f-05fffb35ec40" path="/var/lib/kubelet/pods/30d901bc-be28-4ddc-b46f-05fffb35ec40/volumes"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.889781 4771 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.945007 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 09:11:20 crc kubenswrapper[4771]: I0129 09:11:20.986027 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.007662 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.049568 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.072914 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.106403 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.203717 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29
09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.313974 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.351579 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.414306 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.459177 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.626359 4771 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.806907 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 09:11:21 crc kubenswrapper[4771]: I0129 09:11:21.851056 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.008953 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.039990 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.129721 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.152588 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.206046 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.208627 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.305185 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.305543 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.344759 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.365055 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.383856 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.481073 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 09:11:22 crc 
kubenswrapper[4771]: I0129 09:11:22.560805 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.643123 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.652969 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.669427 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.770381 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.816806 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.841373 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.851812 4771 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.924054 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 09:11:22 crc kubenswrapper[4771]: I0129 09:11:22.954646 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.072012 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.096523 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.125862 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.202135 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.226266 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.266322 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.274768 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.437843 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.468047 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 09:11:23 crc kubenswrapper[4771]: 
I0129 09:11:23.504606 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.516107 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.579026 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.709903 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.799358 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.817959 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.905205 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.906052 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.922496 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 09:11:23 crc kubenswrapper[4771]: I0129 09:11:23.976513 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.094313 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.095927 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.123877 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.131739 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.153377 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.194036 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.327217 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.424206 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.630379 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 09:11:24 crc 
kubenswrapper[4771]: I0129 09:11:24.642140 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.671071 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.701813 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.714003 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.720837 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.745653 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.810852 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.811129 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.948116 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 09:11:24 crc kubenswrapper[4771]: I0129 09:11:24.998910 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.168602 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.177598 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.234834 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.249336 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.262437 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.270041 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.333994 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.380511 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.480612 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 
09:11:25.481334 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.507085 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.524549 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.579464 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.633570 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.697601 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.795464 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.795793 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.820885 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.851564 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.910592 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.929513 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.946283 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 09:11:25 crc kubenswrapper[4771]: I0129 09:11:25.955586 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.007129 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.287121 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.298789 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.370272 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.505057 4771 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.556801 4771 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.626881 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.663525 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.735062 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.736569 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.785016 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.795961 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.847895 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.849002 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.938706 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 09:11:26 crc kubenswrapper[4771]: I0129 09:11:26.947260 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.078271 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.122265 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.199734 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.280439 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.323159 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.329240 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.347723 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.478936 4771 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.524986 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.623569 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.870011 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.955991 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 09:11:27 crc kubenswrapper[4771]: I0129 09:11:27.987182 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.201817 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.403768 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.485648 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.516923 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.692574 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.700168 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.700229 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.700283 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.701078 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"fa792c00195c36f3d7ecef96f2d086a1446ee8524bb391442f7c7a40bd68baff"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
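
The four entries ending at 09:11:28.701078 are one complete startup-probe failure cycle for the kube-controller-manager static pod: patch_prober.go captures the raw HTTP failure (connection refused on https://192.168.126.11:10257/healthz), prober.go records the failed Startup probe, SyncLoop (probe) marks the pod unhealthy, and kuberuntime_manager.go decides the container "failed startup probe, will be restarted". A minimal Go sketch of such a probe loop follows; the endpoint is taken from the log, while failureThreshold and period are assumed values (the pod's actual probe spec is not in the log):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Kubelet HTTPS probes skip certificate verification, which is why the
	// failure surfaces as a plain refused TCP connection, as in the log.
	client := &http.Client{
		Timeout: time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const failureThreshold = 3      // assumed; not visible in the log
	const period = 10 * time.Second // assumed periodSeconds
	failures := 0
	for {
		ok := false
		resp, err := client.Get("https://192.168.126.11:10257/healthz")
		if err == nil {
			if resp.StatusCode >= 200 && resp.StatusCode < 400 {
				ok = true
			}
			resp.Body.Close()
		}
		if ok {
			fmt.Println("startup probe succeeded")
			return
		}
		failures++
		fmt.Printf("startup probe failure %d/%d: %v\n", failures, failureThreshold, err)
		if failures >= failureThreshold {
			fmt.Println("failed startup probe, will be restarted")
			return
		}
		time.Sleep(period)
	}
}

The kill that follows in the next entry honors the pod's termination grace period (gracePeriod=30).
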
container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://fa792c00195c36f3d7ecef96f2d086a1446ee8524bb391442f7c7a40bd68baff" gracePeriod=30 Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.733574 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.895873 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.963195 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 09:11:28 crc kubenswrapper[4771]: I0129 09:11:28.963310 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.008588 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.016196 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.065235 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.072743 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.150026 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.229080 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.243014 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.243082 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.243409 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d" gracePeriod=5 Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.262886 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.286261 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.351782 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.548460 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.560103 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.577096 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.647737 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.669275 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.692713 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.729668 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 09:11:29 crc kubenswrapper[4771]: I0129 09:11:29.818055 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d787499bd-g75vd"] Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.040723 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.132383 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.149110 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.224442 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.272179 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.284941 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d787499bd-g75vd"] Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.377766 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.450872 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.575842 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.685275 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.727013 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.743574 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 09:11:30 crc 
kubenswrapper[4771]: I0129 09:11:30.801852 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.817203 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.828355 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 09:11:30 crc kubenswrapper[4771]: I0129 09:11:30.904488 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.011589 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.085450 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d787499bd-g75vd" event={"ID":"e28ad013-a64e-4852-a6ad-9d1d05210fd5","Type":"ContainerStarted","Data":"7494f2579b6aeda41b5d1e5ce78abb643bcb9d012ee4c6fc6321053acec56cff"}
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.215297 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.533518 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.605053 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.613410 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.633069 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 09:11:31 crc kubenswrapper[4771]: I0129 09:11:31.711529 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.009989 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.043572 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.093510 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d787499bd-g75vd" event={"ID":"e28ad013-a64e-4852-a6ad-9d1d05210fd5","Type":"ContainerStarted","Data":"090788121d84876b3c136a3868ff67068bb87d453f1f1a5cdb4eca80fb087ca3"}
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.094145 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.099337 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d787499bd-g75vd"
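
The pod_startup_latency_tracker entry just below closes out the oauth-openshift rollout traced above. podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp; firstStartedPulling and lastFinishedPulling are the zero time because no image pull was needed, so nothing is subtracted: 09:10:39 + 53.119990841s = 09:11:32.119990841. A Go sketch of that arithmetic, using the wall-clock timestamps exactly as the entry prints them (the trailing m=+... monotonic readings are dropped):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching time.Time's default String() format used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-29 09:10:39 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-29 09:11:32.119990841 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 53.119990841s, matching podStartSLOduration/podStartE2EDuration.
	fmt.Printf("%.9fs\n", observed.Sub(created).Seconds())
}
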
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.120009 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d787499bd-g75vd" podStartSLOduration=53.119990841 podStartE2EDuration="53.119990841s" podCreationTimestamp="2026-01-29 09:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:11:32.117299922 +0000 UTC m=+312.240140149" watchObservedRunningTime="2026-01-29 09:11:32.119990841 +0000 UTC m=+312.242831078"
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.214265 4771 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 09:11:32 crc kubenswrapper[4771]: I0129 09:11:32.526359 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 09:11:33 crc kubenswrapper[4771]: I0129 09:11:33.256185 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.819074 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.819144 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.844227 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.854965 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.855001 4771 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5bd87968-d718-4604-83c9-4f3c06f061fa"
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.858426 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.858475 4771 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5bd87968-d718-4604-83c9-4f3c06f061fa"
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950173 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950215 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950240 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName:
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950296 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950411 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950410 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950559 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.950616 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.951149 4771 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.951338 4771 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.951361 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 09:11:34 crc kubenswrapper[4771]: I0129 09:11:34.958472 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.053019 4771 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.053087 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.109482 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.109550 4771 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d" exitCode=137 Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.109602 4771 scope.go:117] "RemoveContainer" containerID="722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d" Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.109639 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.129379 4771 scope.go:117] "RemoveContainer" containerID="722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d" Jan 29 09:11:35 crc kubenswrapper[4771]: E0129 09:11:35.129774 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d\": container with ID starting with 722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d not found: ID does not exist" containerID="722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d" Jan 29 09:11:35 crc kubenswrapper[4771]: I0129 09:11:35.129818 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d"} err="failed to get container status \"722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d\": rpc error: code = NotFound desc = could not find container \"722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d\": container with ID starting with 722f8f464652e348416d37051112f6de6a40759e2057a6e210005d24a726601d not found: ID does not exist" Jan 29 09:11:36 crc kubenswrapper[4771]: I0129 09:11:36.861714 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 09:11:53 crc kubenswrapper[4771]: I0129 09:11:53.045848 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 09:11:56 crc kubenswrapper[4771]: I0129 09:11:56.808457 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 09:11:59 crc kubenswrapper[4771]: I0129 09:11:59.276904 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 09:11:59 crc kubenswrapper[4771]: I0129 09:11:59.284365 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 09:11:59 crc kubenswrapper[4771]: I0129 09:11:59.284420 4771 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fa792c00195c36f3d7ecef96f2d086a1446ee8524bb391442f7c7a40bd68baff" exitCode=137 Jan 29 09:11:59 crc kubenswrapper[4771]: I0129 09:11:59.284465 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fa792c00195c36f3d7ecef96f2d086a1446ee8524bb391442f7c7a40bd68baff"} Jan 29 09:11:59 crc kubenswrapper[4771]: I0129 09:11:59.284502 4771 scope.go:117] "RemoveContainer" containerID="7a2528723b33af0877f0caae29012def6cdc6295b6757ed96f26b686a1efadbb" Jan 29 09:12:00 crc kubenswrapper[4771]: I0129 09:12:00.291128 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 09:12:00 crc kubenswrapper[4771]: I0129 09:12:00.292084 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c28b8630c05ccb3803c9577bcdc7dcc431958630e3bdb2952be8d6fb8549236e"} Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.221735 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.860418 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sffpf"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.860778 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sffpf" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="registry-server" containerID="cri-o://3803be0b23694a01f9c882593f656e6f0b8919bf5d8df1577cb28f9c7f1a413d" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.872001 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwlvt"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.872357 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hwlvt" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="registry-server" containerID="cri-o://743f49f20cab1ff2ffd97a6d95b9f4f5b89548fc45afb2c6e320244738d35ec2" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.881547 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j7lqm"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.881855 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j7lqm" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="registry-server" containerID="cri-o://36dcb4d47bc3146cd41860f4909a0eebc7c531a90da4f133d90bd338936e7957" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.886817 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqgdw"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.887068 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerName="marketplace-operator" containerID="cri-o://0a78765b621fa413b642e9e387c847b7457754220495cfe7fe72ea2816bf0cca" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.892337 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qs84"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.894545 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qs84" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="registry-server" containerID="cri-o://cb1186317a8eed31b42343a2a2ba52870993be1470d9a47b5dda8f57dbed1bc6" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.904724 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5q79"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.905026 4771 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-b5q79" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="registry-server" containerID="cri-o://f809be2b03928edc8459163d7c774c445b990f733b068771149c872d0f0db5e3" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.914484 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-478nk"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.914847 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-478nk" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="registry-server" containerID="cri-o://f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.922066 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vphmv"] Jan 29 09:12:01 crc kubenswrapper[4771]: I0129 09:12:01.922396 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vphmv" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="registry-server" containerID="cri-o://0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283" gracePeriod=30 Jan 29 09:12:01 crc kubenswrapper[4771]: E0129 09:12:01.995457 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283 is running failed: container process not found" containerID="0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:12:01 crc kubenswrapper[4771]: E0129 09:12:01.998023 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283 is running failed: container process not found" containerID="0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:12:01 crc kubenswrapper[4771]: E0129 09:12:01.998469 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283 is running failed: container process not found" containerID="0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:12:01 crc kubenswrapper[4771]: E0129 09:12:01.998525 4771 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-vphmv" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="registry-server" Jan 29 09:12:02 crc kubenswrapper[4771]: E0129 09:12:02.161499 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245 is running failed: container process not found" containerID="f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:12:02 crc kubenswrapper[4771]: E0129 09:12:02.161994 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245 is running failed: container process not found" containerID="f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:12:02 crc kubenswrapper[4771]: E0129 09:12:02.162639 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245 is running failed: container process not found" containerID="f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 09:12:02 crc kubenswrapper[4771]: E0129 09:12:02.162770 4771 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-478nk" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="registry-server" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.315193 4771 generic.go:334] "Generic (PLEG): container finished" podID="d60426c7-faf6-4300-9ed0-160a76d81782" containerID="36dcb4d47bc3146cd41860f4909a0eebc7c531a90da4f133d90bd338936e7957" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.315352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7lqm" event={"ID":"d60426c7-faf6-4300-9ed0-160a76d81782","Type":"ContainerDied","Data":"36dcb4d47bc3146cd41860f4909a0eebc7c531a90da4f133d90bd338936e7957"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.318428 4771 generic.go:334] "Generic (PLEG): container finished" podID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerID="0a78765b621fa413b642e9e387c847b7457754220495cfe7fe72ea2816bf0cca" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.318529 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" event={"ID":"f92b62e7-3351-4e65-a49d-49b6a6217796","Type":"ContainerDied","Data":"0a78765b621fa413b642e9e387c847b7457754220495cfe7fe72ea2816bf0cca"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.318620 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" event={"ID":"f92b62e7-3351-4e65-a49d-49b6a6217796","Type":"ContainerDied","Data":"2fbc87d3afbeef33ef1a63cab07da35c49d9d74daab62b6e0d5dbbde020375f5"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.318634 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fbc87d3afbeef33ef1a63cab07da35c49d9d74daab62b6e0d5dbbde020375f5" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.323045 4771 generic.go:334] "Generic (PLEG): container finished" podID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerID="f809be2b03928edc8459163d7c774c445b990f733b068771149c872d0f0db5e3" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.323425 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5q79" 
event={"ID":"fc279bbe-fdb9-4371-afaf-e2573ea04ce2","Type":"ContainerDied","Data":"f809be2b03928edc8459163d7c774c445b990f733b068771149c872d0f0db5e3"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.329021 4771 generic.go:334] "Generic (PLEG): container finished" podID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerID="0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.329133 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vphmv" event={"ID":"d0799f09-12e7-42a2-90c6-c1fb70e5c67f","Type":"ContainerDied","Data":"0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.333955 4771 generic.go:334] "Generic (PLEG): container finished" podID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerID="f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.334016 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-478nk" event={"ID":"5d6ca13b-901b-4de9-bf79-494866c7ebdd","Type":"ContainerDied","Data":"f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.340532 4771 generic.go:334] "Generic (PLEG): container finished" podID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerID="cb1186317a8eed31b42343a2a2ba52870993be1470d9a47b5dda8f57dbed1bc6" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.340634 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qs84" event={"ID":"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b","Type":"ContainerDied","Data":"cb1186317a8eed31b42343a2a2ba52870993be1470d9a47b5dda8f57dbed1bc6"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.343578 4771 generic.go:334] "Generic (PLEG): container finished" podID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerID="743f49f20cab1ff2ffd97a6d95b9f4f5b89548fc45afb2c6e320244738d35ec2" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.343645 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwlvt" event={"ID":"8baa3171-ca7e-40a7-bd19-dfae944704fa","Type":"ContainerDied","Data":"743f49f20cab1ff2ffd97a6d95b9f4f5b89548fc45afb2c6e320244738d35ec2"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.350571 4771 generic.go:334] "Generic (PLEG): container finished" podID="e6838739-1976-46a6-891d-e2a7ee919777" containerID="3803be0b23694a01f9c882593f656e6f0b8919bf5d8df1577cb28f9c7f1a413d" exitCode=0 Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.350908 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sffpf" event={"ID":"e6838739-1976-46a6-891d-e2a7ee919777","Type":"ContainerDied","Data":"3803be0b23694a01f9c882593f656e6f0b8919bf5d8df1577cb28f9c7f1a413d"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.350996 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sffpf" event={"ID":"e6838739-1976-46a6-891d-e2a7ee919777","Type":"ContainerDied","Data":"6d1bc4ba9a136e0b8c53de1f9dc403094477e0265fde1b49bf4ce6c6a21912ad"} Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.351012 4771 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6d1bc4ba9a136e0b8c53de1f9dc403094477e0265fde1b49bf4ce6c6a21912ad" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.533845 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sffpf" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.539011 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.545426 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.554949 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.564903 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.569845 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5q79" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.662510 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-catalog-content\") pod \"e6838739-1976-46a6-891d-e2a7ee919777\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.662930 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phwm7\" (UniqueName: \"kubernetes.io/projected/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-kube-api-access-phwm7\") pod \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663084 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-catalog-content\") pod \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663169 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ztlz\" (UniqueName: \"kubernetes.io/projected/5d6ca13b-901b-4de9-bf79-494866c7ebdd-kube-api-access-8ztlz\") pod \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663254 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-trusted-ca\") pod \"f92b62e7-3351-4e65-a49d-49b6a6217796\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663364 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-utilities\") pod \"e6838739-1976-46a6-891d-e2a7ee919777\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663472 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-482lq\" (UniqueName: \"kubernetes.io/projected/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-kube-api-access-482lq\") pod \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663574 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-utilities\") pod \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663651 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdv5j\" (UniqueName: \"kubernetes.io/projected/f92b62e7-3351-4e65-a49d-49b6a6217796-kube-api-access-jdv5j\") pod \"f92b62e7-3351-4e65-a49d-49b6a6217796\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663747 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-catalog-content\") pod \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\" (UID: \"d0799f09-12e7-42a2-90c6-c1fb70e5c67f\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663843 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-catalog-content\") pod \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663925 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-utilities\") pod \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.663998 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-operator-metrics\") pod \"f92b62e7-3351-4e65-a49d-49b6a6217796\" (UID: \"f92b62e7-3351-4e65-a49d-49b6a6217796\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.664082 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf884\" (UniqueName: \"kubernetes.io/projected/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-kube-api-access-lf884\") pod \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.664195 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-utilities\") pod \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\" (UID: \"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.664301 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/e6838739-1976-46a6-891d-e2a7ee919777-kube-api-access-fpngr\") pod \"e6838739-1976-46a6-891d-e2a7ee919777\" (UID: \"e6838739-1976-46a6-891d-e2a7ee919777\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 
09:12:02.664391 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-catalog-content\") pod \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\" (UID: \"fc279bbe-fdb9-4371-afaf-e2573ea04ce2\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.664474 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-utilities\") pod \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\" (UID: \"5d6ca13b-901b-4de9-bf79-494866c7ebdd\") " Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.664217 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f92b62e7-3351-4e65-a49d-49b6a6217796" (UID: "f92b62e7-3351-4e65-a49d-49b6a6217796"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.664390 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-utilities" (OuterVolumeSpecName: "utilities") pod "e6838739-1976-46a6-891d-e2a7ee919777" (UID: "e6838739-1976-46a6-891d-e2a7ee919777"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.664972 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-utilities" (OuterVolumeSpecName: "utilities") pod "fc279bbe-fdb9-4371-afaf-e2573ea04ce2" (UID: "fc279bbe-fdb9-4371-afaf-e2573ea04ce2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.665627 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-utilities" (OuterVolumeSpecName: "utilities") pod "5d6ca13b-901b-4de9-bf79-494866c7ebdd" (UID: "5d6ca13b-901b-4de9-bf79-494866c7ebdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.665806 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-utilities" (OuterVolumeSpecName: "utilities") pod "d0799f09-12e7-42a2-90c6-c1fb70e5c67f" (UID: "d0799f09-12e7-42a2-90c6-c1fb70e5c67f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.666991 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-utilities" (OuterVolumeSpecName: "utilities") pod "07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" (UID: "07d5db3c-f1d2-4b77-bcf7-07ef89073f9b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.669282 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-kube-api-access-482lq" (OuterVolumeSpecName: "kube-api-access-482lq") pod "07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" (UID: "07d5db3c-f1d2-4b77-bcf7-07ef89073f9b"). InnerVolumeSpecName "kube-api-access-482lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.669568 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-kube-api-access-lf884" (OuterVolumeSpecName: "kube-api-access-lf884") pod "fc279bbe-fdb9-4371-afaf-e2573ea04ce2" (UID: "fc279bbe-fdb9-4371-afaf-e2573ea04ce2"). InnerVolumeSpecName "kube-api-access-lf884". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.671229 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6ca13b-901b-4de9-bf79-494866c7ebdd-kube-api-access-8ztlz" (OuterVolumeSpecName: "kube-api-access-8ztlz") pod "5d6ca13b-901b-4de9-bf79-494866c7ebdd" (UID: "5d6ca13b-901b-4de9-bf79-494866c7ebdd"). InnerVolumeSpecName "kube-api-access-8ztlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.671665 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92b62e7-3351-4e65-a49d-49b6a6217796-kube-api-access-jdv5j" (OuterVolumeSpecName: "kube-api-access-jdv5j") pod "f92b62e7-3351-4e65-a49d-49b6a6217796" (UID: "f92b62e7-3351-4e65-a49d-49b6a6217796"). InnerVolumeSpecName "kube-api-access-jdv5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.672361 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-kube-api-access-phwm7" (OuterVolumeSpecName: "kube-api-access-phwm7") pod "d0799f09-12e7-42a2-90c6-c1fb70e5c67f" (UID: "d0799f09-12e7-42a2-90c6-c1fb70e5c67f"). InnerVolumeSpecName "kube-api-access-phwm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.676598 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f92b62e7-3351-4e65-a49d-49b6a6217796" (UID: "f92b62e7-3351-4e65-a49d-49b6a6217796"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.679341 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6838739-1976-46a6-891d-e2a7ee919777-kube-api-access-fpngr" (OuterVolumeSpecName: "kube-api-access-fpngr") pod "e6838739-1976-46a6-891d-e2a7ee919777" (UID: "e6838739-1976-46a6-891d-e2a7ee919777"). InnerVolumeSpecName "kube-api-access-fpngr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.692800 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc279bbe-fdb9-4371-afaf-e2573ea04ce2" (UID: "fc279bbe-fdb9-4371-afaf-e2573ea04ce2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.707099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" (UID: "07d5db3c-f1d2-4b77-bcf7-07ef89073f9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.729594 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6838739-1976-46a6-891d-e2a7ee919777" (UID: "e6838739-1976-46a6-891d-e2a7ee919777"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767342 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767382 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phwm7\" (UniqueName: \"kubernetes.io/projected/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-kube-api-access-phwm7\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767395 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ztlz\" (UniqueName: \"kubernetes.io/projected/5d6ca13b-901b-4de9-bf79-494866c7ebdd-kube-api-access-8ztlz\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767408 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767419 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6838739-1976-46a6-891d-e2a7ee919777-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767429 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-482lq\" (UniqueName: \"kubernetes.io/projected/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-kube-api-access-482lq\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767441 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767452 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdv5j\" (UniqueName: 
\"kubernetes.io/projected/f92b62e7-3351-4e65-a49d-49b6a6217796-kube-api-access-jdv5j\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767462 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767472 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767481 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f92b62e7-3351-4e65-a49d-49b6a6217796-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767491 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf884\" (UniqueName: \"kubernetes.io/projected/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-kube-api-access-lf884\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767501 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767511 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/e6838739-1976-46a6-891d-e2a7ee919777-kube-api-access-fpngr\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767522 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc279bbe-fdb9-4371-afaf-e2573ea04ce2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.767532 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.838909 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d6ca13b-901b-4de9-bf79-494866c7ebdd" (UID: "5d6ca13b-901b-4de9-bf79-494866c7ebdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.842528 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0799f09-12e7-42a2-90c6-c1fb70e5c67f" (UID: "d0799f09-12e7-42a2-90c6-c1fb70e5c67f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.868645 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ca13b-901b-4de9-bf79-494866c7ebdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:02 crc kubenswrapper[4771]: I0129 09:12:02.868686 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0799f09-12e7-42a2-90c6-c1fb70e5c67f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.045930 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwlvt" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.050710 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7lqm" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.174630 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-utilities\") pod \"8baa3171-ca7e-40a7-bd19-dfae944704fa\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.174768 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tqtn\" (UniqueName: \"kubernetes.io/projected/8baa3171-ca7e-40a7-bd19-dfae944704fa-kube-api-access-9tqtn\") pod \"8baa3171-ca7e-40a7-bd19-dfae944704fa\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.174808 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-utilities\") pod \"d60426c7-faf6-4300-9ed0-160a76d81782\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.174831 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5vgq\" (UniqueName: \"kubernetes.io/projected/d60426c7-faf6-4300-9ed0-160a76d81782-kube-api-access-j5vgq\") pod \"d60426c7-faf6-4300-9ed0-160a76d81782\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.174852 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-catalog-content\") pod \"8baa3171-ca7e-40a7-bd19-dfae944704fa\" (UID: \"8baa3171-ca7e-40a7-bd19-dfae944704fa\") " Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.174902 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-catalog-content\") pod \"d60426c7-faf6-4300-9ed0-160a76d81782\" (UID: \"d60426c7-faf6-4300-9ed0-160a76d81782\") " Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.175847 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-utilities" (OuterVolumeSpecName: "utilities") pod "d60426c7-faf6-4300-9ed0-160a76d81782" (UID: "d60426c7-faf6-4300-9ed0-160a76d81782"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.176595 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-utilities" (OuterVolumeSpecName: "utilities") pod "8baa3171-ca7e-40a7-bd19-dfae944704fa" (UID: "8baa3171-ca7e-40a7-bd19-dfae944704fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.180630 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8baa3171-ca7e-40a7-bd19-dfae944704fa-kube-api-access-9tqtn" (OuterVolumeSpecName: "kube-api-access-9tqtn") pod "8baa3171-ca7e-40a7-bd19-dfae944704fa" (UID: "8baa3171-ca7e-40a7-bd19-dfae944704fa"). InnerVolumeSpecName "kube-api-access-9tqtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.180938 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60426c7-faf6-4300-9ed0-160a76d81782-kube-api-access-j5vgq" (OuterVolumeSpecName: "kube-api-access-j5vgq") pod "d60426c7-faf6-4300-9ed0-160a76d81782" (UID: "d60426c7-faf6-4300-9ed0-160a76d81782"). InnerVolumeSpecName "kube-api-access-j5vgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.238847 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8baa3171-ca7e-40a7-bd19-dfae944704fa" (UID: "8baa3171-ca7e-40a7-bd19-dfae944704fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.242982 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d60426c7-faf6-4300-9ed0-160a76d81782" (UID: "d60426c7-faf6-4300-9ed0-160a76d81782"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.276092 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.276406 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.276491 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tqtn\" (UniqueName: \"kubernetes.io/projected/8baa3171-ca7e-40a7-bd19-dfae944704fa-kube-api-access-9tqtn\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.276553 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d60426c7-faf6-4300-9ed0-160a76d81782-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.276613 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5vgq\" (UniqueName: \"kubernetes.io/projected/d60426c7-faf6-4300-9ed0-160a76d81782-kube-api-access-j5vgq\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.276678 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baa3171-ca7e-40a7-bd19-dfae944704fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.357128 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qs84" event={"ID":"07d5db3c-f1d2-4b77-bcf7-07ef89073f9b","Type":"ContainerDied","Data":"595a22fdee8437cdcb4b8ecb91884d9f9143f393a1193de26520030646c0d605"} Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.357174 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qs84" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.357184 4771 scope.go:117] "RemoveContainer" containerID="cb1186317a8eed31b42343a2a2ba52870993be1470d9a47b5dda8f57dbed1bc6" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.360468 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-478nk" event={"ID":"5d6ca13b-901b-4de9-bf79-494866c7ebdd","Type":"ContainerDied","Data":"cd46e6d93b1abfd1ee48f7c53223bc4808a7b3fa50aba2a2dd8d85207af80d68"} Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.360513 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-478nk" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.363143 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwlvt" event={"ID":"8baa3171-ca7e-40a7-bd19-dfae944704fa","Type":"ContainerDied","Data":"f5bbbe22190017c5f9b7e27da21460d4123131cca77ec82def5195fe2d3266ed"} Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.363342 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwlvt" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.370976 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j7lqm" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.371006 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7lqm" event={"ID":"d60426c7-faf6-4300-9ed0-160a76d81782","Type":"ContainerDied","Data":"3325c1adc2f671dab7a5d79ae30019c158639ce50e61cd7274b4c30f49a93ff6"} Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.374095 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5q79" event={"ID":"fc279bbe-fdb9-4371-afaf-e2573ea04ce2","Type":"ContainerDied","Data":"16308c45fb65079b77f5b0d1eb46c9b416c00f85e98916dd43141d3ac05b54ec"} Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.374226 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5q79" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.379149 4771 scope.go:117] "RemoveContainer" containerID="5a24837ef2995d85ad5b43aa7da231c912ee375a2a23bdb2186d146f4a018bf4" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.387135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vphmv" event={"ID":"d0799f09-12e7-42a2-90c6-c1fb70e5c67f","Type":"ContainerDied","Data":"39f4bdf527d1f6c8a25108c86d674f2593fd148ea782f0846c6e8bfddf2295d6"} Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.387171 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sffpf" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.387264 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vphmv" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.387396 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nqgdw" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.396563 4771 scope.go:117] "RemoveContainer" containerID="aa8d61b0bcee6e2c65cdc4b6f4a63b506900990e331a05b24f4b1d43d98df1bb" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.399895 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qs84"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.405761 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qs84"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.409965 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-478nk"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.414564 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-478nk"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.419403 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwlvt"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.423901 4771 scope.go:117] "RemoveContainer" containerID="f990be08d2352d25311aaf81d22cc6ed4e1636030eec10737bbe8d14a1905245" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.425352 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hwlvt"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.432010 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5q79"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.440768 4771 scope.go:117] "RemoveContainer" containerID="752b2577b67abf2391d1da2d125166dbb137d7ed002ef3c0831a607f7d543512" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.441282 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5q79"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.457380 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vphmv"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.461966 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vphmv"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.465068 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqgdw"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.482486 4771 scope.go:117] "RemoveContainer" containerID="89c7497ee0081f84ff72cf8a897b39558e74ad7cc8e2b352460716ca7e847234" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.483448 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nqgdw"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.494734 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sffpf"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.497347 4771 scope.go:117] "RemoveContainer" containerID="743f49f20cab1ff2ffd97a6d95b9f4f5b89548fc45afb2c6e320244738d35ec2" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.498610 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sffpf"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.502476 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-j7lqm"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.505891 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j7lqm"] Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.513813 4771 scope.go:117] "RemoveContainer" containerID="584b58afaed6973119c456c25f4c31aefcd15bf7238e21c087d398b624f5c8ae" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.535309 4771 scope.go:117] "RemoveContainer" containerID="7670b5785e9044afd3132706311067cd5a2197f862eee172f6ae13b19e4c7244" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.554195 4771 scope.go:117] "RemoveContainer" containerID="36dcb4d47bc3146cd41860f4909a0eebc7c531a90da4f133d90bd338936e7957" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.569767 4771 scope.go:117] "RemoveContainer" containerID="0cf04b054a7ca5ee75cdaa7278263de34c00ec7ec61929e1ce2240b691cad1d3" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.585930 4771 scope.go:117] "RemoveContainer" containerID="f359effebd4f093b212cdf2ac2e319b1503a2c703dd1f69a0f4715acc9eb5eca" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.604059 4771 scope.go:117] "RemoveContainer" containerID="f809be2b03928edc8459163d7c774c445b990f733b068771149c872d0f0db5e3" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.619190 4771 scope.go:117] "RemoveContainer" containerID="ad9d0e43cdf87bfffcf9cf91d9085c55b399713f712e07f809f33f52c0b9524b" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.635727 4771 scope.go:117] "RemoveContainer" containerID="b0d036560c5dac298417c6dd8694cf023bd476badbad178ff2764d8cecfb68a4" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.652027 4771 scope.go:117] "RemoveContainer" containerID="0cc1592449be9ccbc8c377ac6184028de2531a89da560a64ead5ae83cc14e283" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.665910 4771 scope.go:117] "RemoveContainer" containerID="638eb06582b2313f0e3996a76eae2a3875ff5451dab8dbb4e2aaa6ec218f9a90" Jan 29 09:12:03 crc kubenswrapper[4771]: I0129 09:12:03.685120 4771 scope.go:117] "RemoveContainer" containerID="4ecc763a2ad8addcdf507e44d92e6cf6e20fb37b71028706cb1b56804927b443" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.848122 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" path="/var/lib/kubelet/pods/07d5db3c-f1d2-4b77-bcf7-07ef89073f9b/volumes" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.849422 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" path="/var/lib/kubelet/pods/5d6ca13b-901b-4de9-bf79-494866c7ebdd/volumes" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.850308 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" path="/var/lib/kubelet/pods/8baa3171-ca7e-40a7-bd19-dfae944704fa/volumes" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.851789 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" path="/var/lib/kubelet/pods/d0799f09-12e7-42a2-90c6-c1fb70e5c67f/volumes" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.852577 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" path="/var/lib/kubelet/pods/d60426c7-faf6-4300-9ed0-160a76d81782/volumes" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.854069 4771 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="e6838739-1976-46a6-891d-e2a7ee919777" path="/var/lib/kubelet/pods/e6838739-1976-46a6-891d-e2a7ee919777/volumes" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.854950 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" path="/var/lib/kubelet/pods/f92b62e7-3351-4e65-a49d-49b6a6217796/volumes" Jan 29 09:12:04 crc kubenswrapper[4771]: I0129 09:12:04.855558 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" path="/var/lib/kubelet/pods/fc279bbe-fdb9-4371-afaf-e2573ea04ce2/volumes" Jan 29 09:12:08 crc kubenswrapper[4771]: I0129 09:12:08.699312 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:12:08 crc kubenswrapper[4771]: I0129 09:12:08.703549 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:12:08 crc kubenswrapper[4771]: I0129 09:12:08.779443 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 09:12:09 crc kubenswrapper[4771]: I0129 09:12:09.433455 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 09:12:10 crc kubenswrapper[4771]: I0129 09:12:10.639565 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.016606 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ts5f5"] Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017531 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017543 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017552 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017558 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017573 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerName="marketplace-operator" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017580 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerName="marketplace-operator" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017592 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017599 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017610 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017617 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017626 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017633 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017644 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017651 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017658 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017664 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017673 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017680 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017691 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017700 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017729 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017738 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017753 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017763 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017775 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017781 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017789 4771 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017796 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017808 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017816 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017824 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017831 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017842 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017850 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017858 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017864 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017872 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017878 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="extract-content" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017887 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017893 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017901 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017906 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017914 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017922 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.017930 4771 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.017936 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="extract-utilities" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018027 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6838739-1976-46a6-891d-e2a7ee919777" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018037 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6ca13b-901b-4de9-bf79-494866c7ebdd" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018046 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8baa3171-ca7e-40a7-bd19-dfae944704fa" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018054 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc279bbe-fdb9-4371-afaf-e2573ea04ce2" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018060 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0799f09-12e7-42a2-90c6-c1fb70e5c67f" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018069 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018076 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92b62e7-3351-4e65-a49d-49b6a6217796" containerName="marketplace-operator" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018085 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60426c7-faf6-4300-9ed0-160a76d81782" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018091 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d5db3c-f1d2-4b77-bcf7-07ef89073f9b" containerName="registry-server" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.018482 4771 util.go:30] "No sandbox for pod can be found. 
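The paired cpu_manager.go:410 / state_mem.go:107 entries above, and the memory_manager.go:354 entries that follow them, record the kubelet pruning leftover CPU- and memory-manager state for containers whose pods no longer exist, just before it admits a new pod. A minimal sketch of that pruning pattern, with hypothetical stateMemory/removeStaleState names (illustrative only, not kubelet's actual code):

```go
package main

import "log"

// key identifies one container's resource-manager state entry.
type key struct{ podUID, containerName string }

// stateMemory is a stand-in for kubelet's in-memory CPUSet assignment state.
type stateMemory struct{ assignments map[key]struct{} }

func (s *stateMemory) delete(k key) {
	delete(s.assignments, k)
	log.Printf("Deleted CPUSet assignment podUID=%q containerName=%q", k.podUID, k.containerName)
}

// removeStaleState drops state for every container whose pod is no longer
// active, mirroring the RemoveStaleState log pattern above.
func removeStaleState(s *stateMemory, activePods map[string]bool) {
	for k := range s.assignments {
		if !activePods[k.podUID] {
			log.Printf("RemoveStaleState: removing container podUID=%q containerName=%q", k.podUID, k.containerName)
			s.delete(k)
		}
	}
}

func main() {
	s := &stateMemory{assignments: map[key]struct{}{
		{"5d6ca13b-901b-4de9-bf79-494866c7ebdd", "registry-server"}: {},
	}}
	removeStaleState(s, map[string]bool{}) // no active pods: everything is stale
}
```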
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.024998 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.025173 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.025295 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.025540 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.029023 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.049128 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ts5f5"] Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.065268 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd"] Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.065530 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" podUID="55f07f08-7620-440d-81b8-39fdb35d84a3" containerName="route-controller-manager" containerID="cri-o://ec1df1c73766744dc77c30ce776491db53f08e823c7aba9750d343bb5ff6cffb" gracePeriod=30 Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.089424 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ld9n"] Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.089724 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" podUID="83770b88-0e8f-4356-b13e-a4deeb9c8b2a" containerName="controller-manager" containerID="cri-o://087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5" gracePeriod=30 Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.096446 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdb7\" (UniqueName: \"kubernetes.io/projected/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-kube-api-access-nwdb7\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.096514 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.096554 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.198419 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwdb7\" (UniqueName: \"kubernetes.io/projected/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-kube-api-access-nwdb7\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.198768 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.198904 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.200632 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.206235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.233104 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwdb7\" (UniqueName: \"kubernetes.io/projected/f98a10ab-5df8-4994-b6a3-c62c3c3a8c82-kube-api-access-nwdb7\") pod \"marketplace-operator-79b997595-ts5f5\" (UID: \"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82\") " pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.336811 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.499795 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.515039 4771 generic.go:334] "Generic (PLEG): container finished" podID="83770b88-0e8f-4356-b13e-a4deeb9c8b2a" containerID="087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5" exitCode=0 Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.515145 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" event={"ID":"83770b88-0e8f-4356-b13e-a4deeb9c8b2a","Type":"ContainerDied","Data":"087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5"} Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.515443 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" event={"ID":"83770b88-0e8f-4356-b13e-a4deeb9c8b2a","Type":"ContainerDied","Data":"0dfe81e18deff68270040d8d81ddfdaff70466c2c10114b00dd821c67dfe6ac3"} Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.515466 4771 scope.go:117] "RemoveContainer" containerID="087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.522302 4771 generic.go:334] "Generic (PLEG): container finished" podID="55f07f08-7620-440d-81b8-39fdb35d84a3" containerID="ec1df1c73766744dc77c30ce776491db53f08e823c7aba9750d343bb5ff6cffb" exitCode=0 Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.522342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" event={"ID":"55f07f08-7620-440d-81b8-39fdb35d84a3","Type":"ContainerDied","Data":"ec1df1c73766744dc77c30ce776491db53f08e823c7aba9750d343bb5ff6cffb"} Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.562110 4771 scope.go:117] "RemoveContainer" containerID="087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5" Jan 29 09:12:18 crc kubenswrapper[4771]: E0129 09:12:18.562642 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5\": container with ID starting with 087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5 not found: ID does not exist" containerID="087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.562699 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5"} err="failed to get container status \"087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5\": rpc error: code = NotFound desc = could not find container \"087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5\": container with ID starting with 087b50dff6c9ce142cfdaba413fb3e7e0f1c234f8851cb62c3ca8336bf9740c5 not found: ID does not exist" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.594494 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.603377 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-config\") pod \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.606969 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-config" (OuterVolumeSpecName: "config") pod "83770b88-0e8f-4356-b13e-a4deeb9c8b2a" (UID: "83770b88-0e8f-4356-b13e-a4deeb9c8b2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.607353 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-client-ca\") pod \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.607442 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-proxy-ca-bundles\") pod \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.607473 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-serving-cert\") pod \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.608366 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wt2x\" (UniqueName: \"kubernetes.io/projected/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-kube-api-access-2wt2x\") pod \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\" (UID: \"83770b88-0e8f-4356-b13e-a4deeb9c8b2a\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.607789 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-client-ca" (OuterVolumeSpecName: "client-ca") pod "83770b88-0e8f-4356-b13e-a4deeb9c8b2a" (UID: "83770b88-0e8f-4356-b13e-a4deeb9c8b2a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.608254 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "83770b88-0e8f-4356-b13e-a4deeb9c8b2a" (UID: "83770b88-0e8f-4356-b13e-a4deeb9c8b2a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.613239 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83770b88-0e8f-4356-b13e-a4deeb9c8b2a" (UID: "83770b88-0e8f-4356-b13e-a4deeb9c8b2a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.614173 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.614213 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.614228 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.614243 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.621365 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ts5f5"] Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.641790 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-kube-api-access-2wt2x" (OuterVolumeSpecName: "kube-api-access-2wt2x") pod "83770b88-0e8f-4356-b13e-a4deeb9c8b2a" (UID: "83770b88-0e8f-4356-b13e-a4deeb9c8b2a"). InnerVolumeSpecName "kube-api-access-2wt2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.715874 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-config\") pod \"55f07f08-7620-440d-81b8-39fdb35d84a3\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.716688 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgsrk\" (UniqueName: \"kubernetes.io/projected/55f07f08-7620-440d-81b8-39fdb35d84a3-kube-api-access-lgsrk\") pod \"55f07f08-7620-440d-81b8-39fdb35d84a3\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.717033 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-client-ca\") pod \"55f07f08-7620-440d-81b8-39fdb35d84a3\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.717084 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f07f08-7620-440d-81b8-39fdb35d84a3-serving-cert\") pod \"55f07f08-7620-440d-81b8-39fdb35d84a3\" (UID: \"55f07f08-7620-440d-81b8-39fdb35d84a3\") " Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.717069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-config" (OuterVolumeSpecName: "config") pod "55f07f08-7620-440d-81b8-39fdb35d84a3" (UID: "55f07f08-7620-440d-81b8-39fdb35d84a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.717927 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-client-ca" (OuterVolumeSpecName: "client-ca") pod "55f07f08-7620-440d-81b8-39fdb35d84a3" (UID: "55f07f08-7620-440d-81b8-39fdb35d84a3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.720748 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.720780 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wt2x\" (UniqueName: \"kubernetes.io/projected/83770b88-0e8f-4356-b13e-a4deeb9c8b2a-kube-api-access-2wt2x\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.720790 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55f07f08-7620-440d-81b8-39fdb35d84a3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.722000 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f07f08-7620-440d-81b8-39fdb35d84a3-kube-api-access-lgsrk" (OuterVolumeSpecName: "kube-api-access-lgsrk") pod "55f07f08-7620-440d-81b8-39fdb35d84a3" (UID: "55f07f08-7620-440d-81b8-39fdb35d84a3"). InnerVolumeSpecName "kube-api-access-lgsrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.722362 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f07f08-7620-440d-81b8-39fdb35d84a3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55f07f08-7620-440d-81b8-39fdb35d84a3" (UID: "55f07f08-7620-440d-81b8-39fdb35d84a3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.821907 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgsrk\" (UniqueName: \"kubernetes.io/projected/55f07f08-7620-440d-81b8-39fdb35d84a3-kube-api-access-lgsrk\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:18 crc kubenswrapper[4771]: I0129 09:12:18.821944 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f07f08-7620-440d-81b8-39fdb35d84a3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.529747 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" event={"ID":"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82","Type":"ContainerStarted","Data":"a2504953743dafe32437c6425c5b679f411f92f2da024835ffdd683783e90126"} Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.530184 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" event={"ID":"f98a10ab-5df8-4994-b6a3-c62c3c3a8c82","Type":"ContainerStarted","Data":"cabdd070b3c0823a4bee51f3d89c9bc9150608b051ea6940c4dfd405af7077b1"} Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.530208 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.533932 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.534656 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ld9n" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.536962 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" event={"ID":"55f07f08-7620-440d-81b8-39fdb35d84a3","Type":"ContainerDied","Data":"2720d6c1a629a441023a2b3e69724bf5ce11ffe462018963b6e804c1152935b6"} Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.537033 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.537109 4771 scope.go:117] "RemoveContainer" containerID="ec1df1c73766744dc77c30ce776491db53f08e823c7aba9750d343bb5ff6cffb" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.569476 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ts5f5" podStartSLOduration=2.569454645 podStartE2EDuration="2.569454645s" podCreationTimestamp="2026-01-29 09:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:12:19.551601871 +0000 UTC m=+359.674442098" watchObservedRunningTime="2026-01-29 09:12:19.569454645 +0000 UTC m=+359.692294882" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.587640 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ld9n"] Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.595985 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ld9n"] Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.609619 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd"] Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.618577 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-f9qrd"] Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.847482 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g"] Jan 29 09:12:19 crc kubenswrapper[4771]: E0129 09:12:19.847777 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83770b88-0e8f-4356-b13e-a4deeb9c8b2a" containerName="controller-manager" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.847795 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="83770b88-0e8f-4356-b13e-a4deeb9c8b2a" containerName="controller-manager" Jan 29 09:12:19 crc kubenswrapper[4771]: E0129 09:12:19.847808 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f07f08-7620-440d-81b8-39fdb35d84a3" containerName="route-controller-manager" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.847818 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f07f08-7620-440d-81b8-39fdb35d84a3" containerName="route-controller-manager" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.847934 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f07f08-7620-440d-81b8-39fdb35d84a3" containerName="route-controller-manager" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.847949 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="83770b88-0e8f-4356-b13e-a4deeb9c8b2a" containerName="controller-manager" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.848420 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.851828 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.851967 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.852083 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.852214 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.852659 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.852674 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.863206 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g"] Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.935335 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dba9c4-3b68-402a-b3b8-4e0fd478571c-client-ca\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.935394 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29vzl\" (UniqueName: \"kubernetes.io/projected/07dba9c4-3b68-402a-b3b8-4e0fd478571c-kube-api-access-29vzl\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.935757 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dba9c4-3b68-402a-b3b8-4e0fd478571c-serving-cert\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:19 crc kubenswrapper[4771]: I0129 09:12:19.935826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dba9c4-3b68-402a-b3b8-4e0fd478571c-config\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.037371 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dba9c4-3b68-402a-b3b8-4e0fd478571c-client-ca\") pod 
\"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.037450 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29vzl\" (UniqueName: \"kubernetes.io/projected/07dba9c4-3b68-402a-b3b8-4e0fd478571c-kube-api-access-29vzl\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.037538 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dba9c4-3b68-402a-b3b8-4e0fd478571c-serving-cert\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.037620 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dba9c4-3b68-402a-b3b8-4e0fd478571c-config\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.038484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dba9c4-3b68-402a-b3b8-4e0fd478571c-client-ca\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.038661 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dba9c4-3b68-402a-b3b8-4e0fd478571c-config\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.042307 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dba9c4-3b68-402a-b3b8-4e0fd478571c-serving-cert\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.056406 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29vzl\" (UniqueName: \"kubernetes.io/projected/07dba9c4-3b68-402a-b3b8-4e0fd478571c-kube-api-access-29vzl\") pod \"route-controller-manager-854cf458cd-ll56g\" (UID: \"07dba9c4-3b68-402a-b3b8-4e0fd478571c\") " pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.069652 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-68k8j"] Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.070831 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.072812 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.077317 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.077447 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.077630 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.077733 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.077898 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.083751 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.092140 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-68k8j"] Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.165328 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.239293 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-serving-cert\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.239381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59dj\" (UniqueName: \"kubernetes.io/projected/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-kube-api-access-k59dj\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.239414 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-config\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.239457 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-client-ca\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 
29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.239491 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-proxy-ca-bundles\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.340367 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59dj\" (UniqueName: \"kubernetes.io/projected/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-kube-api-access-k59dj\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.340435 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-config\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.340486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-client-ca\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.340523 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-proxy-ca-bundles\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.340603 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-serving-cert\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.342105 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-client-ca\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.342877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-proxy-ca-bundles\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.343382 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-config\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.354634 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-serving-cert\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.358693 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59dj\" (UniqueName: \"kubernetes.io/projected/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-kube-api-access-k59dj\") pod \"controller-manager-5964cbcb45-68k8j\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.399569 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.602893 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g"] Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.623368 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-68k8j"] Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.847897 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f07f08-7620-440d-81b8-39fdb35d84a3" path="/var/lib/kubelet/pods/55f07f08-7620-440d-81b8-39fdb35d84a3/volumes" Jan 29 09:12:20 crc kubenswrapper[4771]: I0129 09:12:20.848830 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83770b88-0e8f-4356-b13e-a4deeb9c8b2a" path="/var/lib/kubelet/pods/83770b88-0e8f-4356-b13e-a4deeb9c8b2a/volumes" Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.564752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" event={"ID":"07dba9c4-3b68-402a-b3b8-4e0fd478571c","Type":"ContainerStarted","Data":"2cc808108152c05860837fe3ae7a3efa64ba5aa743b0d938228abb27761e9f3f"} Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.564814 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" event={"ID":"07dba9c4-3b68-402a-b3b8-4e0fd478571c","Type":"ContainerStarted","Data":"d48b562a6db4a13de898cdef3adf73fd9e6765ec78ab23e2649d55fffe53bcb0"} Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.565326 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.569000 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" event={"ID":"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb","Type":"ContainerStarted","Data":"2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6"} Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.569038 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" event={"ID":"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb","Type":"ContainerStarted","Data":"a8dcbc81ecbf735609d19955ffcb50c3b78e01617a32d6ee113af4d313417206"} Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.569052 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.575502 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.577853 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" Jan 29 09:12:21 crc kubenswrapper[4771]: I0129 09:12:21.590168 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-854cf458cd-ll56g" podStartSLOduration=2.5901450280000002 podStartE2EDuration="2.590145028s" podCreationTimestamp="2026-01-29 09:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:12:21.585431569 +0000 UTC m=+361.708271796" watchObservedRunningTime="2026-01-29 09:12:21.590145028 +0000 UTC m=+361.712985265" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.369250 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" podStartSLOduration=21.369210115 podStartE2EDuration="21.369210115s" podCreationTimestamp="2026-01-29 09:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:12:21.650923461 +0000 UTC m=+361.773763688" watchObservedRunningTime="2026-01-29 09:12:39.369210115 +0000 UTC m=+379.492050342" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.370948 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4qdmm"] Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.372530 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.376667 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.381261 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qdmm"] Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.416466 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prmz\" (UniqueName: \"kubernetes.io/projected/2093c052-f157-4807-9420-92386e715703-kube-api-access-8prmz\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.416949 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-catalog-content\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.417014 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-utilities\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.519096 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-utilities\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.519300 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8prmz\" (UniqueName: \"kubernetes.io/projected/2093c052-f157-4807-9420-92386e715703-kube-api-access-8prmz\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.519331 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-catalog-content\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.520016 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-utilities\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.520078 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-catalog-content\") pod \"certified-operators-4qdmm\" (UID: 
\"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.543810 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8prmz\" (UniqueName: \"kubernetes.io/projected/2093c052-f157-4807-9420-92386e715703-kube-api-access-8prmz\") pod \"certified-operators-4qdmm\" (UID: \"2093c052-f157-4807-9420-92386e715703\") " pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.565544 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdkd"] Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.567008 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.571760 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.578229 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdkd"] Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.621659 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shkdn\" (UniqueName: \"kubernetes.io/projected/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-kube-api-access-shkdn\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.621824 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-utilities\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.621874 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-catalog-content\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.693731 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.722850 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shkdn\" (UniqueName: \"kubernetes.io/projected/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-kube-api-access-shkdn\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.722955 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-utilities\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.723003 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-catalog-content\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.723547 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-catalog-content\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.723658 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-utilities\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.746215 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shkdn\" (UniqueName: \"kubernetes.io/projected/8f515b08-46ce-4d24-ba30-d3b4b9bee0f1-kube-api-access-shkdn\") pod \"redhat-marketplace-nfdkd\" (UID: \"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1\") " pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:39 crc kubenswrapper[4771]: I0129 09:12:39.889136 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.185047 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qdmm"] Jan 29 09:12:40 crc kubenswrapper[4771]: W0129 09:12:40.188962 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2093c052_f157_4807_9420_92386e715703.slice/crio-5fe52f020935d5b0bd10499102b43b6d48a334c93960a8ee59b5b9aad6a51961 WatchSource:0}: Error finding container 5fe52f020935d5b0bd10499102b43b6d48a334c93960a8ee59b5b9aad6a51961: Status 404 returned error can't find the container with id 5fe52f020935d5b0bd10499102b43b6d48a334c93960a8ee59b5b9aad6a51961 Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.333621 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdkd"] Jan 29 09:12:40 crc kubenswrapper[4771]: W0129 09:12:40.334487 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f515b08_46ce_4d24_ba30_d3b4b9bee0f1.slice/crio-009e570715d6ae64a2baa2fa5eab12f04404e32701ebfcaa4fb4c74fd03fc1c6 WatchSource:0}: Error finding container 009e570715d6ae64a2baa2fa5eab12f04404e32701ebfcaa4fb4c74fd03fc1c6: Status 404 returned error can't find the container with id 009e570715d6ae64a2baa2fa5eab12f04404e32701ebfcaa4fb4c74fd03fc1c6 Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.681086 4771 generic.go:334] "Generic (PLEG): container finished" podID="2093c052-f157-4807-9420-92386e715703" containerID="d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c" exitCode=0 Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.681398 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qdmm" event={"ID":"2093c052-f157-4807-9420-92386e715703","Type":"ContainerDied","Data":"d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c"} Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.681558 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qdmm" event={"ID":"2093c052-f157-4807-9420-92386e715703","Type":"ContainerStarted","Data":"5fe52f020935d5b0bd10499102b43b6d48a334c93960a8ee59b5b9aad6a51961"} Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.685362 4771 generic.go:334] "Generic (PLEG): container finished" podID="8f515b08-46ce-4d24-ba30-d3b4b9bee0f1" containerID="aa51e7a3bd88e718d8bd56f007e4a946517fa39b91913a6e72000735c65c9575" exitCode=0 Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.685415 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdkd" event={"ID":"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1","Type":"ContainerDied","Data":"aa51e7a3bd88e718d8bd56f007e4a946517fa39b91913a6e72000735c65c9575"} Jan 29 09:12:40 crc kubenswrapper[4771]: I0129 09:12:40.685452 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdkd" event={"ID":"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1","Type":"ContainerStarted","Data":"009e570715d6ae64a2baa2fa5eab12f04404e32701ebfcaa4fb4c74fd03fc1c6"} Jan 29 09:12:41 crc kubenswrapper[4771]: I0129 09:12:41.764483 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4tk6z"] Jan 29 09:12:41 crc kubenswrapper[4771]: I0129 09:12:41.766970 4771 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:41 crc kubenswrapper[4771]: I0129 09:12:41.768964 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 09:12:41 crc kubenswrapper[4771]: I0129 09:12:41.784481 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tk6z"] Jan 29 09:12:41 crc kubenswrapper[4771]: I0129 09:12:41.951629 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03f3394-a05b-4de0-ba06-1191b58d6fa8-utilities\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:41 crc kubenswrapper[4771]: I0129 09:12:41.952192 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03f3394-a05b-4de0-ba06-1191b58d6fa8-catalog-content\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:41 crc kubenswrapper[4771]: I0129 09:12:41.952286 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvgg2\" (UniqueName: \"kubernetes.io/projected/c03f3394-a05b-4de0-ba06-1191b58d6fa8-kube-api-access-tvgg2\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.053166 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvgg2\" (UniqueName: \"kubernetes.io/projected/c03f3394-a05b-4de0-ba06-1191b58d6fa8-kube-api-access-tvgg2\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.053339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03f3394-a05b-4de0-ba06-1191b58d6fa8-utilities\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.053393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03f3394-a05b-4de0-ba06-1191b58d6fa8-catalog-content\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.053857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c03f3394-a05b-4de0-ba06-1191b58d6fa8-utilities\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.053987 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c03f3394-a05b-4de0-ba06-1191b58d6fa8-catalog-content\") pod \"redhat-operators-4tk6z\" (UID: 
\"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.081631 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvgg2\" (UniqueName: \"kubernetes.io/projected/c03f3394-a05b-4de0-ba06-1191b58d6fa8-kube-api-access-tvgg2\") pod \"redhat-operators-4tk6z\" (UID: \"c03f3394-a05b-4de0-ba06-1191b58d6fa8\") " pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.089464 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.171965 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sss7k"] Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.173950 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.179218 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.181685 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sss7k"] Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.256398 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca40d1-6622-443e-b79e-fe6896d2f66d-utilities\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.256605 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca40d1-6622-443e-b79e-fe6896d2f66d-catalog-content\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.256961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jv6h\" (UniqueName: \"kubernetes.io/projected/48ca40d1-6622-443e-b79e-fe6896d2f66d-kube-api-access-6jv6h\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.357783 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jv6h\" (UniqueName: \"kubernetes.io/projected/48ca40d1-6622-443e-b79e-fe6896d2f66d-kube-api-access-6jv6h\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.360372 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca40d1-6622-443e-b79e-fe6896d2f66d-utilities\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.360414 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca40d1-6622-443e-b79e-fe6896d2f66d-catalog-content\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.360884 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca40d1-6622-443e-b79e-fe6896d2f66d-utilities\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.362243 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca40d1-6622-443e-b79e-fe6896d2f66d-catalog-content\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.381433 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jv6h\" (UniqueName: \"kubernetes.io/projected/48ca40d1-6622-443e-b79e-fe6896d2f66d-kube-api-access-6jv6h\") pod \"community-operators-sss7k\" (UID: \"48ca40d1-6622-443e-b79e-fe6896d2f66d\") " pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.519733 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.542479 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tk6z"] Jan 29 09:12:42 crc kubenswrapper[4771]: W0129 09:12:42.586086 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc03f3394_a05b_4de0_ba06_1191b58d6fa8.slice/crio-1fc8406f0c56421fc9a7dc1a82b7e189c29209c0bcc61444c1d54b65ae2601ba WatchSource:0}: Error finding container 1fc8406f0c56421fc9a7dc1a82b7e189c29209c0bcc61444c1d54b65ae2601ba: Status 404 returned error can't find the container with id 1fc8406f0c56421fc9a7dc1a82b7e189c29209c0bcc61444c1d54b65ae2601ba Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.697552 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tk6z" event={"ID":"c03f3394-a05b-4de0-ba06-1191b58d6fa8","Type":"ContainerStarted","Data":"1fc8406f0c56421fc9a7dc1a82b7e189c29209c0bcc61444c1d54b65ae2601ba"} Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.703877 4771 generic.go:334] "Generic (PLEG): container finished" podID="2093c052-f157-4807-9420-92386e715703" containerID="a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a" exitCode=0 Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.703927 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qdmm" event={"ID":"2093c052-f157-4807-9420-92386e715703","Type":"ContainerDied","Data":"a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a"} Jan 29 09:12:42 crc kubenswrapper[4771]: I0129 09:12:42.953788 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sss7k"] Jan 29 09:12:43 crc kubenswrapper[4771]: I0129 09:12:43.711110 4771 generic.go:334] "Generic (PLEG): container 
finished" podID="8f515b08-46ce-4d24-ba30-d3b4b9bee0f1" containerID="3965a7e0496df8d647b0ea1e3ed10d9a2324ad7b119068d058a1dc15f7926640" exitCode=0 Jan 29 09:12:43 crc kubenswrapper[4771]: I0129 09:12:43.711311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdkd" event={"ID":"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1","Type":"ContainerDied","Data":"3965a7e0496df8d647b0ea1e3ed10d9a2324ad7b119068d058a1dc15f7926640"} Jan 29 09:12:43 crc kubenswrapper[4771]: I0129 09:12:43.713646 4771 generic.go:334] "Generic (PLEG): container finished" podID="c03f3394-a05b-4de0-ba06-1191b58d6fa8" containerID="a830c67628bf0eac26f9611e6c225e1c1990c3ef5988c2a5ce45ef1e3696bba4" exitCode=0 Jan 29 09:12:43 crc kubenswrapper[4771]: I0129 09:12:43.713847 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tk6z" event={"ID":"c03f3394-a05b-4de0-ba06-1191b58d6fa8","Type":"ContainerDied","Data":"a830c67628bf0eac26f9611e6c225e1c1990c3ef5988c2a5ce45ef1e3696bba4"} Jan 29 09:12:43 crc kubenswrapper[4771]: I0129 09:12:43.719603 4771 generic.go:334] "Generic (PLEG): container finished" podID="48ca40d1-6622-443e-b79e-fe6896d2f66d" containerID="36f52e07e5fcc53738b3ade84067f6c330e86262c47182a88a600406a966c732" exitCode=0 Jan 29 09:12:43 crc kubenswrapper[4771]: I0129 09:12:43.719649 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sss7k" event={"ID":"48ca40d1-6622-443e-b79e-fe6896d2f66d","Type":"ContainerDied","Data":"36f52e07e5fcc53738b3ade84067f6c330e86262c47182a88a600406a966c732"} Jan 29 09:12:43 crc kubenswrapper[4771]: I0129 09:12:43.719682 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sss7k" event={"ID":"48ca40d1-6622-443e-b79e-fe6896d2f66d","Type":"ContainerStarted","Data":"dc811c5dd24fe9441f612cd57a0008113d824f68ee9f4a3a7bdb4f1518da44d4"} Jan 29 09:12:44 crc kubenswrapper[4771]: I0129 09:12:44.271817 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:12:44 crc kubenswrapper[4771]: I0129 09:12:44.271908 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:12:44 crc kubenswrapper[4771]: I0129 09:12:44.730777 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdkd" event={"ID":"8f515b08-46ce-4d24-ba30-d3b4b9bee0f1","Type":"ContainerStarted","Data":"0152f3830f4dbefd11d3e94251b16cf28bc31509f469f3075f73c2a1e6669719"} Jan 29 09:12:44 crc kubenswrapper[4771]: I0129 09:12:44.732959 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qdmm" event={"ID":"2093c052-f157-4807-9420-92386e715703","Type":"ContainerStarted","Data":"caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231"} Jan 29 09:12:44 crc kubenswrapper[4771]: I0129 09:12:44.754891 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4qdmm" 
podStartSLOduration=2.871958779 podStartE2EDuration="5.754872594s" podCreationTimestamp="2026-01-29 09:12:39 +0000 UTC" firstStartedPulling="2026-01-29 09:12:40.685206268 +0000 UTC m=+380.808046495" lastFinishedPulling="2026-01-29 09:12:43.568120083 +0000 UTC m=+383.690960310" observedRunningTime="2026-01-29 09:12:44.750550247 +0000 UTC m=+384.873390484" watchObservedRunningTime="2026-01-29 09:12:44.754872594 +0000 UTC m=+384.877712821" Jan 29 09:12:45 crc kubenswrapper[4771]: I0129 09:12:45.743561 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tk6z" event={"ID":"c03f3394-a05b-4de0-ba06-1191b58d6fa8","Type":"ContainerStarted","Data":"40a304124dacd2605b81279d301cb49f2c165445971a939f504f3cdb9467730b"} Jan 29 09:12:45 crc kubenswrapper[4771]: I0129 09:12:45.745959 4771 generic.go:334] "Generic (PLEG): container finished" podID="48ca40d1-6622-443e-b79e-fe6896d2f66d" containerID="3b40af1e96cdcb8339e13ea2f0bc42de1f591709c1f7469f78be2937e61acc56" exitCode=0 Jan 29 09:12:45 crc kubenswrapper[4771]: I0129 09:12:45.745993 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sss7k" event={"ID":"48ca40d1-6622-443e-b79e-fe6896d2f66d","Type":"ContainerDied","Data":"3b40af1e96cdcb8339e13ea2f0bc42de1f591709c1f7469f78be2937e61acc56"} Jan 29 09:12:45 crc kubenswrapper[4771]: I0129 09:12:45.790472 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nfdkd" podStartSLOduration=2.953984165 podStartE2EDuration="6.790447611s" podCreationTimestamp="2026-01-29 09:12:39 +0000 UTC" firstStartedPulling="2026-01-29 09:12:40.686831035 +0000 UTC m=+380.809671262" lastFinishedPulling="2026-01-29 09:12:44.523294471 +0000 UTC m=+384.646134708" observedRunningTime="2026-01-29 09:12:45.787180565 +0000 UTC m=+385.910020802" watchObservedRunningTime="2026-01-29 09:12:45.790447611 +0000 UTC m=+385.913287838" Jan 29 09:12:46 crc kubenswrapper[4771]: I0129 09:12:46.754068 4771 generic.go:334] "Generic (PLEG): container finished" podID="c03f3394-a05b-4de0-ba06-1191b58d6fa8" containerID="40a304124dacd2605b81279d301cb49f2c165445971a939f504f3cdb9467730b" exitCode=0 Jan 29 09:12:46 crc kubenswrapper[4771]: I0129 09:12:46.754151 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tk6z" event={"ID":"c03f3394-a05b-4de0-ba06-1191b58d6fa8","Type":"ContainerDied","Data":"40a304124dacd2605b81279d301cb49f2c165445971a939f504f3cdb9467730b"} Jan 29 09:12:46 crc kubenswrapper[4771]: I0129 09:12:46.757237 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sss7k" event={"ID":"48ca40d1-6622-443e-b79e-fe6896d2f66d","Type":"ContainerStarted","Data":"299008f61508a8abc2f9fb8137c09e1eb23f550f9c49e9633527a8c828f670e2"} Jan 29 09:12:46 crc kubenswrapper[4771]: I0129 09:12:46.793598 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sss7k" podStartSLOduration=2.028500736 podStartE2EDuration="4.793576575s" podCreationTimestamp="2026-01-29 09:12:42 +0000 UTC" firstStartedPulling="2026-01-29 09:12:43.721422759 +0000 UTC m=+383.844262986" lastFinishedPulling="2026-01-29 09:12:46.486498598 +0000 UTC m=+386.609338825" observedRunningTime="2026-01-29 09:12:46.791757382 +0000 UTC m=+386.914597619" watchObservedRunningTime="2026-01-29 09:12:46.793576575 +0000 UTC m=+386.916416802" Jan 29 09:12:47 crc kubenswrapper[4771]: I0129 
09:12:47.764653 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tk6z" event={"ID":"c03f3394-a05b-4de0-ba06-1191b58d6fa8","Type":"ContainerStarted","Data":"e6682c2f1f0338c8c4614cefd93f258a755c87ef1777e56f6ef473a63935ca12"} Jan 29 09:12:47 crc kubenswrapper[4771]: I0129 09:12:47.788799 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4tk6z" podStartSLOduration=3.28255282 podStartE2EDuration="6.788776388s" podCreationTimestamp="2026-01-29 09:12:41 +0000 UTC" firstStartedPulling="2026-01-29 09:12:43.71495499 +0000 UTC m=+383.837795257" lastFinishedPulling="2026-01-29 09:12:47.221178598 +0000 UTC m=+387.344018825" observedRunningTime="2026-01-29 09:12:47.784537084 +0000 UTC m=+387.907377331" watchObservedRunningTime="2026-01-29 09:12:47.788776388 +0000 UTC m=+387.911616615" Jan 29 09:12:49 crc kubenswrapper[4771]: I0129 09:12:49.694239 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:49 crc kubenswrapper[4771]: I0129 09:12:49.695073 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:49 crc kubenswrapper[4771]: I0129 09:12:49.735282 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:49 crc kubenswrapper[4771]: I0129 09:12:49.817806 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4qdmm" Jan 29 09:12:49 crc kubenswrapper[4771]: I0129 09:12:49.889568 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:49 crc kubenswrapper[4771]: I0129 09:12:49.889652 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:49 crc kubenswrapper[4771]: I0129 09:12:49.928961 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:50 crc kubenswrapper[4771]: I0129 09:12:50.831142 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nfdkd" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.495344 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6c2t4"] Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.496452 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.520916 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6c2t4"] Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.610364 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-trusted-ca\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.610739 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-registry-tls\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.610861 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-bound-sa-token\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.610970 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.611099 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-registry-certificates\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.611193 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.611309 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.611449 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhvc\" (UniqueName: 
\"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-kube-api-access-9fhvc\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.640316 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.712886 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.713579 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fhvc\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-kube-api-access-9fhvc\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.713815 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-trusted-ca\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.713954 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-registry-tls\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.714093 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-bound-sa-token\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.714241 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-registry-certificates\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.714411 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.715056 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.715823 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-registry-certificates\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.716148 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-trusted-ca\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.720087 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-registry-tls\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.720761 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.732288 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fhvc\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-kube-api-access-9fhvc\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.734395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94a522e6-07e6-440d-8dbd-9b5ea1203dd7-bound-sa-token\") pod \"image-registry-66df7c8f76-6c2t4\" (UID: \"94a522e6-07e6-440d-8dbd-9b5ea1203dd7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:51 crc kubenswrapper[4771]: I0129 09:12:51.816348 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.090032 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.090515 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.295919 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6c2t4"] Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.521594 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.521711 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.564328 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.796124 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" event={"ID":"94a522e6-07e6-440d-8dbd-9b5ea1203dd7","Type":"ContainerStarted","Data":"b11d5ddf8c294fe1f74acd92cde57ac118d045109debec8fb07cfe2a7e1fad26"} Jan 29 09:12:52 crc kubenswrapper[4771]: I0129 09:12:52.847997 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sss7k" Jan 29 09:12:53 crc kubenswrapper[4771]: I0129 09:12:53.133526 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4tk6z" podUID="c03f3394-a05b-4de0-ba06-1191b58d6fa8" containerName="registry-server" probeResult="failure" output=< Jan 29 09:12:53 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:12:53 crc kubenswrapper[4771]: > Jan 29 09:12:53 crc kubenswrapper[4771]: I0129 09:12:53.805648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" event={"ID":"94a522e6-07e6-440d-8dbd-9b5ea1203dd7","Type":"ContainerStarted","Data":"b1f9ab33cb21ac398418fe702512a0178bd3fc0d8ed377296bb90117d96e0973"} Jan 29 09:12:53 crc kubenswrapper[4771]: I0129 09:12:53.830230 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" podStartSLOduration=2.830208837 podStartE2EDuration="2.830208837s" podCreationTimestamp="2026-01-29 09:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:12:53.825104442 +0000 UTC m=+393.947944689" watchObservedRunningTime="2026-01-29 09:12:53.830208837 +0000 UTC m=+393.953049064" Jan 29 09:12:54 crc kubenswrapper[4771]: I0129 09:12:54.811284 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:12:56 crc kubenswrapper[4771]: I0129 09:12:56.705470 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:12:56 crc kubenswrapper[4771]: I0129 09:12:56.705926 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:12:56 crc kubenswrapper[4771]: I0129 09:12:56.711447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:12:56 crc kubenswrapper[4771]: I0129 09:12:56.775212 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:12:56 crc kubenswrapper[4771]: I0129 09:12:56.939027 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 09:12:57 crc kubenswrapper[4771]: W0129 09:12:57.343131 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-50bb1573362918f360396bcf602923e6e8b48ba1791b8dbe79023b41480e5e11 WatchSource:0}: Error finding container 50bb1573362918f360396bcf602923e6e8b48ba1791b8dbe79023b41480e5e11: Status 404 returned error can't find the container with id 50bb1573362918f360396bcf602923e6e8b48ba1791b8dbe79023b41480e5e11 Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 09:12:57.720577 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 09:12:57.720754 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 09:12:57.725032 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 
09:12:57.725239 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 09:12:57.738783 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 09:12:57.829674 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b4a789e98ae4eb73cd89f6ad8a9eca4e3d921dcef66e5a47ee636a19be05ffb7"} Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 09:12:57.829740 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"50bb1573362918f360396bcf602923e6e8b48ba1791b8dbe79023b41480e5e11"} Jan 29 09:12:57 crc kubenswrapper[4771]: I0129 09:12:57.839093 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 09:12:57 crc kubenswrapper[4771]: W0129 09:12:57.994352 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-72b08b248ede57fdf1d22c1214ffe9b924dec61bb2bd0f07040de8f133358412 WatchSource:0}: Error finding container 72b08b248ede57fdf1d22c1214ffe9b924dec61bb2bd0f07040de8f133358412: Status 404 returned error can't find the container with id 72b08b248ede57fdf1d22c1214ffe9b924dec61bb2bd0f07040de8f133358412 Jan 29 09:12:58 crc kubenswrapper[4771]: I0129 09:12:58.845123 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fbb1b94266fda1f690710b04cca65b3f3506d37fe91e3e64be6df0406a3ea2e0"} Jan 29 09:12:58 crc kubenswrapper[4771]: I0129 09:12:58.845570 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7ae70149d4d807a46d467da0d149490a2124afa8e3a18ec689bbddf80d945eb2"} Jan 29 09:12:58 crc kubenswrapper[4771]: I0129 09:12:58.845593 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e292ad576106afb56bee1bd380426be78f716de2b8d0ab11b926d5ef1703ed4a"} Jan 29 09:12:58 crc kubenswrapper[4771]: I0129 09:12:58.845605 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"72b08b248ede57fdf1d22c1214ffe9b924dec61bb2bd0f07040de8f133358412"} Jan 29 09:12:58 crc kubenswrapper[4771]: I0129 09:12:58.845895 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:12:59 crc kubenswrapper[4771]: I0129 09:12:59.860025 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-68k8j"] Jan 29 09:12:59 crc kubenswrapper[4771]: I0129 09:12:59.860251 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" podUID="aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" containerName="controller-manager" containerID="cri-o://2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6" gracePeriod=30 Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.312778 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.460568 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-serving-cert\") pod \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.460672 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-proxy-ca-bundles\") pod \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.460823 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59dj\" (UniqueName: \"kubernetes.io/projected/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-kube-api-access-k59dj\") pod \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.460855 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-client-ca\") pod \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.460915 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-config\") pod \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\" (UID: \"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb\") " Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.461812 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" (UID: "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.461969 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-config" (OuterVolumeSpecName: "config") pod "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" (UID: "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.462295 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" (UID: "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.466364 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" (UID: "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.468343 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-kube-api-access-k59dj" (OuterVolumeSpecName: "kube-api-access-k59dj") pod "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" (UID: "aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb"). InnerVolumeSpecName "kube-api-access-k59dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.562843 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.562882 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k59dj\" (UniqueName: \"kubernetes.io/projected/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-kube-api-access-k59dj\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.562899 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.562912 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.562922 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.854310 4771 generic.go:334] "Generic (PLEG): container finished" podID="aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" containerID="2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6" exitCode=0 Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.854668 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" event={"ID":"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb","Type":"ContainerDied","Data":"2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6"} Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.854828 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" 
event={"ID":"aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb","Type":"ContainerDied","Data":"a8dcbc81ecbf735609d19955ffcb50c3b78e01617a32d6ee113af4d313417206"} Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.854756 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-68k8j" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.854961 4771 scope.go:117] "RemoveContainer" containerID="2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.879467 4771 scope.go:117] "RemoveContainer" containerID="2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6" Jan 29 09:13:00 crc kubenswrapper[4771]: E0129 09:13:00.880069 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6\": container with ID starting with 2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6 not found: ID does not exist" containerID="2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.880119 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6"} err="failed to get container status \"2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6\": rpc error: code = NotFound desc = could not find container \"2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6\": container with ID starting with 2640402aaedd36ba0f81ccd239c0d4ceedb5249c6cb14ab77e45ac4a19eaf0a6 not found: ID does not exist" Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.886981 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-68k8j"] Jan 29 09:13:00 crc kubenswrapper[4771]: I0129 09:13:00.896735 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-68k8j"] Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.086654 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c6bc857cd-blxx4"] Jan 29 09:13:01 crc kubenswrapper[4771]: E0129 09:13:01.087046 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" containerName="controller-manager" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.087062 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" containerName="controller-manager" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.087181 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" containerName="controller-manager" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.087665 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.090467 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.090838 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.095010 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.095079 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.095845 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.096498 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.106656 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6bc857cd-blxx4"] Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.110014 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.271611 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-proxy-ca-bundles\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.271676 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd5e616-ef7e-48b3-a893-105d69958825-serving-cert\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.271943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-config\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.272032 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-client-ca\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.272085 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mdcj\" (UniqueName: 
\"kubernetes.io/projected/fdd5e616-ef7e-48b3-a893-105d69958825-kube-api-access-8mdcj\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.372955 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-proxy-ca-bundles\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.373021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd5e616-ef7e-48b3-a893-105d69958825-serving-cert\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.373093 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-config\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.373135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-client-ca\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.373161 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mdcj\" (UniqueName: \"kubernetes.io/projected/fdd5e616-ef7e-48b3-a893-105d69958825-kube-api-access-8mdcj\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.374423 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-client-ca\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.374482 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-proxy-ca-bundles\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.374840 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd5e616-ef7e-48b3-a893-105d69958825-config\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " 
pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.384002 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd5e616-ef7e-48b3-a893-105d69958825-serving-cert\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.392076 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mdcj\" (UniqueName: \"kubernetes.io/projected/fdd5e616-ef7e-48b3-a893-105d69958825-kube-api-access-8mdcj\") pod \"controller-manager-5c6bc857cd-blxx4\" (UID: \"fdd5e616-ef7e-48b3-a893-105d69958825\") " pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.405244 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.606830 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6bc857cd-blxx4"] Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.862793 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" event={"ID":"fdd5e616-ef7e-48b3-a893-105d69958825","Type":"ContainerStarted","Data":"4586e298da08addb42315b5f628da4c7937487a7539e8785582e40f778ec972c"} Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.862854 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" event={"ID":"fdd5e616-ef7e-48b3-a893-105d69958825","Type":"ContainerStarted","Data":"eab37153f2b904aab9de38e681344e47bf178d59b97753f85017567dc337af65"} Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.863090 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.864539 4771 patch_prober.go:28] interesting pod/controller-manager-5c6bc857cd-blxx4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.864730 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" podUID="fdd5e616-ef7e-48b3-a893-105d69958825" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 29 09:13:01 crc kubenswrapper[4771]: I0129 09:13:01.884901 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" podStartSLOduration=2.884878504 podStartE2EDuration="2.884878504s" podCreationTimestamp="2026-01-29 09:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:13:01.880813528 +0000 UTC m=+402.003653755" watchObservedRunningTime="2026-01-29 09:13:01.884878504 +0000 UTC m=+402.007718741" Jan 29 
09:13:02 crc kubenswrapper[4771]: I0129 09:13:02.146193 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:13:02 crc kubenswrapper[4771]: I0129 09:13:02.200261 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4tk6z" Jan 29 09:13:02 crc kubenswrapper[4771]: I0129 09:13:02.844233 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb" path="/var/lib/kubelet/pods/aea19fd3-0c04-4b9c-9ebd-51c010f9b3cb/volumes" Jan 29 09:13:02 crc kubenswrapper[4771]: I0129 09:13:02.873366 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6bc857cd-blxx4" Jan 29 09:13:11 crc kubenswrapper[4771]: I0129 09:13:11.824740 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6c2t4" Jan 29 09:13:11 crc kubenswrapper[4771]: I0129 09:13:11.886352 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jcdkc"] Jan 29 09:13:14 crc kubenswrapper[4771]: I0129 09:13:14.271593 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:13:14 crc kubenswrapper[4771]: I0129 09:13:14.272083 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:13:36 crc kubenswrapper[4771]: I0129 09:13:36.935024 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" podUID="fe9ebbbe-af6e-409d-8039-db5fb66d062b" containerName="registry" containerID="cri-o://edb46a5a5030b5933ad0c787ad595f43f8cc1719b2f40e3d7eef76dcb5e0ff17" gracePeriod=30 Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.071307 4771 generic.go:334] "Generic (PLEG): container finished" podID="fe9ebbbe-af6e-409d-8039-db5fb66d062b" containerID="edb46a5a5030b5933ad0c787ad595f43f8cc1719b2f40e3d7eef76dcb5e0ff17" exitCode=0 Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.071398 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" event={"ID":"fe9ebbbe-af6e-409d-8039-db5fb66d062b","Type":"ContainerDied","Data":"edb46a5a5030b5933ad0c787ad595f43f8cc1719b2f40e3d7eef76dcb5e0ff17"} Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.634493 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772180 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn7z9\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-kube-api-access-gn7z9\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772329 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-trusted-ca\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772390 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe9ebbbe-af6e-409d-8039-db5fb66d062b-ca-trust-extracted\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772451 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-tls\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772486 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-bound-sa-token\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772793 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772891 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-certificates\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.772995 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe9ebbbe-af6e-409d-8039-db5fb66d062b-installation-pull-secrets\") pod \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\" (UID: \"fe9ebbbe-af6e-409d-8039-db5fb66d062b\") " Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.774645 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.774726 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.781511 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.782886 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe9ebbbe-af6e-409d-8039-db5fb66d062b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.783934 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.787782 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.792660 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.802423 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-kube-api-access-gn7z9" (OuterVolumeSpecName: "kube-api-access-gn7z9") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "kube-api-access-gn7z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.805720 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9ebbbe-af6e-409d-8039-db5fb66d062b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fe9ebbbe-af6e-409d-8039-db5fb66d062b" (UID: "fe9ebbbe-af6e-409d-8039-db5fb66d062b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.875736 4771 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.875795 4771 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.875811 4771 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe9ebbbe-af6e-409d-8039-db5fb66d062b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.875822 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn7z9\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-kube-api-access-gn7z9\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.875834 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9ebbbe-af6e-409d-8039-db5fb66d062b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.875845 4771 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe9ebbbe-af6e-409d-8039-db5fb66d062b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:37 crc kubenswrapper[4771]: I0129 09:13:37.875857 4771 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe9ebbbe-af6e-409d-8039-db5fb66d062b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:38 crc kubenswrapper[4771]: I0129 09:13:38.081897 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" event={"ID":"fe9ebbbe-af6e-409d-8039-db5fb66d062b","Type":"ContainerDied","Data":"048db9a5bac80cd495a14b5af624f140ff7677fad33da7f6108aa8cdceaa81d2"} Jan 29 09:13:38 crc kubenswrapper[4771]: I0129 09:13:38.082024 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:13:38 crc kubenswrapper[4771]: I0129 09:13:38.083224 4771 scope.go:117] "RemoveContainer" containerID="edb46a5a5030b5933ad0c787ad595f43f8cc1719b2f40e3d7eef76dcb5e0ff17" Jan 29 09:13:44 crc kubenswrapper[4771]: I0129 09:13:44.271215 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:13:44 crc kubenswrapper[4771]: I0129 09:13:44.271973 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:13:44 crc kubenswrapper[4771]: I0129 09:13:44.272031 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:13:44 crc kubenswrapper[4771]: I0129 09:13:44.273927 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ca9b32e9e3f67b0d55af7b6b208d87b9e276a6c04ca2e9eb6a9b10aa7344d4f"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:13:44 crc kubenswrapper[4771]: I0129 09:13:44.273999 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://7ca9b32e9e3f67b0d55af7b6b208d87b9e276a6c04ca2e9eb6a9b10aa7344d4f" gracePeriod=600 Jan 29 09:13:45 crc kubenswrapper[4771]: I0129 09:13:45.129376 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="7ca9b32e9e3f67b0d55af7b6b208d87b9e276a6c04ca2e9eb6a9b10aa7344d4f" exitCode=0 Jan 29 09:13:45 crc kubenswrapper[4771]: I0129 09:13:45.129545 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"7ca9b32e9e3f67b0d55af7b6b208d87b9e276a6c04ca2e9eb6a9b10aa7344d4f"} Jan 29 09:13:45 crc kubenswrapper[4771]: I0129 09:13:45.129850 4771 scope.go:117] "RemoveContainer" containerID="f08bd8217ccc538afd8c9acd07793b6ee17dd014ea45fa9f127f43ccb95b5b74" Jan 29 09:13:46 crc kubenswrapper[4771]: I0129 09:13:46.138782 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"4424deee2a2b6ea98b4730414a111f836bf44a867c8b22ec3e4343e7aa010238"} Jan 29 09:14:09 crc kubenswrapper[4771]: I0129 09:14:09.129019 4771 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podfe9ebbbe-af6e-409d-8039-db5fb66d062b"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podfe9ebbbe-af6e-409d-8039-db5fb66d062b] : Timed out while waiting for systemd to remove 
kubepods-burstable-podfe9ebbbe_af6e_409d_8039_db5fb66d062b.slice" Jan 29 09:14:09 crc kubenswrapper[4771]: E0129 09:14:09.129879 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podfe9ebbbe-af6e-409d-8039-db5fb66d062b] : unable to destroy cgroup paths for cgroup [kubepods burstable podfe9ebbbe-af6e-409d-8039-db5fb66d062b] : Timed out while waiting for systemd to remove kubepods-burstable-podfe9ebbbe_af6e_409d_8039_db5fb66d062b.slice" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" podUID="fe9ebbbe-af6e-409d-8039-db5fb66d062b" Jan 29 09:14:09 crc kubenswrapper[4771]: I0129 09:14:09.285847 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jcdkc" Jan 29 09:14:09 crc kubenswrapper[4771]: I0129 09:14:09.313745 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jcdkc"] Jan 29 09:14:09 crc kubenswrapper[4771]: I0129 09:14:09.318048 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jcdkc"] Jan 29 09:14:10 crc kubenswrapper[4771]: I0129 09:14:10.845231 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9ebbbe-af6e-409d-8039-db5fb66d062b" path="/var/lib/kubelet/pods/fe9ebbbe-af6e-409d-8039-db5fb66d062b/volumes" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.190787 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h"] Jan 29 09:15:00 crc kubenswrapper[4771]: E0129 09:15:00.191745 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9ebbbe-af6e-409d-8039-db5fb66d062b" containerName="registry" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.191763 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9ebbbe-af6e-409d-8039-db5fb66d062b" containerName="registry" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.191925 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9ebbbe-af6e-409d-8039-db5fb66d062b" containerName="registry" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.193205 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.196186 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.197185 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.200671 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h"] Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.347804 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/941e9ea1-4df6-4b0c-937b-af75139aeb0f-secret-volume\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.348246 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/941e9ea1-4df6-4b0c-937b-af75139aeb0f-config-volume\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.348299 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqn97\" (UniqueName: \"kubernetes.io/projected/941e9ea1-4df6-4b0c-937b-af75139aeb0f-kube-api-access-kqn97\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.449787 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/941e9ea1-4df6-4b0c-937b-af75139aeb0f-secret-volume\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.450212 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/941e9ea1-4df6-4b0c-937b-af75139aeb0f-config-volume\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.450997 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqn97\" (UniqueName: \"kubernetes.io/projected/941e9ea1-4df6-4b0c-937b-af75139aeb0f-kube-api-access-kqn97\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.451532 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/941e9ea1-4df6-4b0c-937b-af75139aeb0f-config-volume\") pod 
\"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.459390 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/941e9ea1-4df6-4b0c-937b-af75139aeb0f-secret-volume\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.469457 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqn97\" (UniqueName: \"kubernetes.io/projected/941e9ea1-4df6-4b0c-937b-af75139aeb0f-kube-api-access-kqn97\") pod \"collect-profiles-29494635-vtc4h\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.520299 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:00 crc kubenswrapper[4771]: I0129 09:15:00.959275 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h"] Jan 29 09:15:01 crc kubenswrapper[4771]: I0129 09:15:01.600843 4771 generic.go:334] "Generic (PLEG): container finished" podID="941e9ea1-4df6-4b0c-937b-af75139aeb0f" containerID="ed76971758709e75637c869d09158601251e108a263b6e528702706bc6820b78" exitCode=0 Jan 29 09:15:01 crc kubenswrapper[4771]: I0129 09:15:01.600930 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" event={"ID":"941e9ea1-4df6-4b0c-937b-af75139aeb0f","Type":"ContainerDied","Data":"ed76971758709e75637c869d09158601251e108a263b6e528702706bc6820b78"} Jan 29 09:15:01 crc kubenswrapper[4771]: I0129 09:15:01.601014 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" event={"ID":"941e9ea1-4df6-4b0c-937b-af75139aeb0f","Type":"ContainerStarted","Data":"5d50de74a8b3ead52d25ed7e39542dc1161f16494b375962950188e6e3b6ed96"} Jan 29 09:15:02 crc kubenswrapper[4771]: I0129 09:15:02.860327 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:02 crc kubenswrapper[4771]: I0129 09:15:02.985792 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/941e9ea1-4df6-4b0c-937b-af75139aeb0f-config-volume\") pod \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " Jan 29 09:15:02 crc kubenswrapper[4771]: I0129 09:15:02.986020 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/941e9ea1-4df6-4b0c-937b-af75139aeb0f-secret-volume\") pod \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " Jan 29 09:15:02 crc kubenswrapper[4771]: I0129 09:15:02.986073 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqn97\" (UniqueName: \"kubernetes.io/projected/941e9ea1-4df6-4b0c-937b-af75139aeb0f-kube-api-access-kqn97\") pod \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\" (UID: \"941e9ea1-4df6-4b0c-937b-af75139aeb0f\") " Jan 29 09:15:02 crc kubenswrapper[4771]: I0129 09:15:02.986887 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941e9ea1-4df6-4b0c-937b-af75139aeb0f-config-volume" (OuterVolumeSpecName: "config-volume") pod "941e9ea1-4df6-4b0c-937b-af75139aeb0f" (UID: "941e9ea1-4df6-4b0c-937b-af75139aeb0f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:15:02 crc kubenswrapper[4771]: I0129 09:15:02.993091 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941e9ea1-4df6-4b0c-937b-af75139aeb0f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "941e9ea1-4df6-4b0c-937b-af75139aeb0f" (UID: "941e9ea1-4df6-4b0c-937b-af75139aeb0f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:15:02 crc kubenswrapper[4771]: I0129 09:15:02.993878 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941e9ea1-4df6-4b0c-937b-af75139aeb0f-kube-api-access-kqn97" (OuterVolumeSpecName: "kube-api-access-kqn97") pod "941e9ea1-4df6-4b0c-937b-af75139aeb0f" (UID: "941e9ea1-4df6-4b0c-937b-af75139aeb0f"). InnerVolumeSpecName "kube-api-access-kqn97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:15:03 crc kubenswrapper[4771]: I0129 09:15:03.087864 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqn97\" (UniqueName: \"kubernetes.io/projected/941e9ea1-4df6-4b0c-937b-af75139aeb0f-kube-api-access-kqn97\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:03 crc kubenswrapper[4771]: I0129 09:15:03.087923 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/941e9ea1-4df6-4b0c-937b-af75139aeb0f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:03 crc kubenswrapper[4771]: I0129 09:15:03.087941 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/941e9ea1-4df6-4b0c-937b-af75139aeb0f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:03 crc kubenswrapper[4771]: I0129 09:15:03.615959 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" Jan 29 09:15:03 crc kubenswrapper[4771]: I0129 09:15:03.615876 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h" event={"ID":"941e9ea1-4df6-4b0c-937b-af75139aeb0f","Type":"ContainerDied","Data":"5d50de74a8b3ead52d25ed7e39542dc1161f16494b375962950188e6e3b6ed96"} Jan 29 09:15:03 crc kubenswrapper[4771]: I0129 09:15:03.616382 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d50de74a8b3ead52d25ed7e39542dc1161f16494b375962950188e6e3b6ed96" Jan 29 09:15:20 crc kubenswrapper[4771]: I0129 09:15:20.963274 4771 scope.go:117] "RemoveContainer" containerID="777eb793c7ff67c99090a65804fcec98ca671c71d29e69be15f3f51a120283ce" Jan 29 09:15:20 crc kubenswrapper[4771]: I0129 09:15:20.983306 4771 scope.go:117] "RemoveContainer" containerID="18a8aa64171be8a556b9d7999fd79b88ad8fd53cffb0fb169535113ce4db9987" Jan 29 09:15:21 crc kubenswrapper[4771]: I0129 09:15:21.005926 4771 scope.go:117] "RemoveContainer" containerID="0a78765b621fa413b642e9e387c847b7457754220495cfe7fe72ea2816bf0cca" Jan 29 09:16:14 crc kubenswrapper[4771]: I0129 09:16:14.271646 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:16:14 crc kubenswrapper[4771]: I0129 09:16:14.272487 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:16:21 crc kubenswrapper[4771]: I0129 09:16:21.043651 4771 scope.go:117] "RemoveContainer" containerID="844efa433b50671edd80cb7060225fa105bf8a07c3dd620d8578f637342370b1" Jan 29 09:16:21 crc kubenswrapper[4771]: I0129 09:16:21.073506 4771 scope.go:117] "RemoveContainer" containerID="518e7542100d2c99babcd1482df0d3b25ba44a7af83d11194f82c993f2eebd77" Jan 29 09:16:21 crc kubenswrapper[4771]: I0129 09:16:21.091772 4771 scope.go:117] "RemoveContainer" containerID="3803be0b23694a01f9c882593f656e6f0b8919bf5d8df1577cb28f9c7f1a413d" Jan 29 09:16:21 crc kubenswrapper[4771]: I0129 09:16:21.111023 4771 scope.go:117] "RemoveContainer" containerID="c113c248255744503a99f0eb381121c1b25eb87ac06c9ed753792efbe0212783" Jan 29 09:16:44 crc kubenswrapper[4771]: I0129 09:16:44.271341 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:16:44 crc kubenswrapper[4771]: I0129 09:16:44.272234 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.271117 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.271892 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.271944 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.272603 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4424deee2a2b6ea98b4730414a111f836bf44a867c8b22ec3e4343e7aa010238"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.272663 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://4424deee2a2b6ea98b4730414a111f836bf44a867c8b22ec3e4343e7aa010238" gracePeriod=600 Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.525077 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="4424deee2a2b6ea98b4730414a111f836bf44a867c8b22ec3e4343e7aa010238" exitCode=0 Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.525195 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"4424deee2a2b6ea98b4730414a111f836bf44a867c8b22ec3e4343e7aa010238"} Jan 29 09:17:14 crc kubenswrapper[4771]: I0129 09:17:14.526084 4771 scope.go:117] "RemoveContainer" containerID="7ca9b32e9e3f67b0d55af7b6b208d87b9e276a6c04ca2e9eb6a9b10aa7344d4f" Jan 29 09:17:15 crc kubenswrapper[4771]: I0129 09:17:15.535669 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"03920492f3ef5aedc2a41c61dc5f9a95c03384d306c3153f2e8b409334342291"} Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.416285 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt"] Jan 29 09:17:44 crc kubenswrapper[4771]: E0129 09:17:44.419131 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941e9ea1-4df6-4b0c-937b-af75139aeb0f" containerName="collect-profiles" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.419171 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="941e9ea1-4df6-4b0c-937b-af75139aeb0f" containerName="collect-profiles" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.419278 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="941e9ea1-4df6-4b0c-937b-af75139aeb0f" containerName="collect-profiles" Jan 29 09:17:44 crc 
kubenswrapper[4771]: I0129 09:17:44.419797 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.424346 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.424747 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.425046 4771 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-skvl7" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.437753 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt"] Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.442966 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-dds85"] Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.444532 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dds85" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.449613 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-knhgs"] Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.450011 4771 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-w2qkn" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.452943 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.454593 4771 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ltj69" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.459310 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dds85"] Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.479273 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-knhgs"] Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.602414 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwvph\" (UniqueName: \"kubernetes.io/projected/fd41faef-aa84-4754-8dc1-36aeafc1e4c3-kube-api-access-bwvph\") pod \"cert-manager-webhook-687f57d79b-knhgs\" (UID: \"fd41faef-aa84-4754-8dc1-36aeafc1e4c3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.602492 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxstl\" (UniqueName: \"kubernetes.io/projected/0eb51574-328d-4156-aa8d-50355bb9d9c2-kube-api-access-sxstl\") pod \"cert-manager-858654f9db-dds85\" (UID: \"0eb51574-328d-4156-aa8d-50355bb9d9c2\") " pod="cert-manager/cert-manager-858654f9db-dds85" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.602525 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psx2s\" (UniqueName: \"kubernetes.io/projected/b8b4db5a-9eaa-4640-8031-185eede7de9b-kube-api-access-psx2s\") pod \"cert-manager-cainjector-cf98fcc89-7r6xt\" (UID: 
\"b8b4db5a-9eaa-4640-8031-185eede7de9b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.703617 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvph\" (UniqueName: \"kubernetes.io/projected/fd41faef-aa84-4754-8dc1-36aeafc1e4c3-kube-api-access-bwvph\") pod \"cert-manager-webhook-687f57d79b-knhgs\" (UID: \"fd41faef-aa84-4754-8dc1-36aeafc1e4c3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.703716 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxstl\" (UniqueName: \"kubernetes.io/projected/0eb51574-328d-4156-aa8d-50355bb9d9c2-kube-api-access-sxstl\") pod \"cert-manager-858654f9db-dds85\" (UID: \"0eb51574-328d-4156-aa8d-50355bb9d9c2\") " pod="cert-manager/cert-manager-858654f9db-dds85" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.703749 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psx2s\" (UniqueName: \"kubernetes.io/projected/b8b4db5a-9eaa-4640-8031-185eede7de9b-kube-api-access-psx2s\") pod \"cert-manager-cainjector-cf98fcc89-7r6xt\" (UID: \"b8b4db5a-9eaa-4640-8031-185eede7de9b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.731321 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvph\" (UniqueName: \"kubernetes.io/projected/fd41faef-aa84-4754-8dc1-36aeafc1e4c3-kube-api-access-bwvph\") pod \"cert-manager-webhook-687f57d79b-knhgs\" (UID: \"fd41faef-aa84-4754-8dc1-36aeafc1e4c3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.731336 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxstl\" (UniqueName: \"kubernetes.io/projected/0eb51574-328d-4156-aa8d-50355bb9d9c2-kube-api-access-sxstl\") pod \"cert-manager-858654f9db-dds85\" (UID: \"0eb51574-328d-4156-aa8d-50355bb9d9c2\") " pod="cert-manager/cert-manager-858654f9db-dds85" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.733440 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psx2s\" (UniqueName: \"kubernetes.io/projected/b8b4db5a-9eaa-4640-8031-185eede7de9b-kube-api-access-psx2s\") pod \"cert-manager-cainjector-cf98fcc89-7r6xt\" (UID: \"b8b4db5a-9eaa-4640-8031-185eede7de9b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.747093 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.772415 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dds85" Jan 29 09:17:44 crc kubenswrapper[4771]: I0129 09:17:44.786298 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" Jan 29 09:17:45 crc kubenswrapper[4771]: I0129 09:17:45.056592 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dds85"] Jan 29 09:17:45 crc kubenswrapper[4771]: I0129 09:17:45.069024 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:17:45 crc kubenswrapper[4771]: I0129 09:17:45.089384 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-knhgs"] Jan 29 09:17:45 crc kubenswrapper[4771]: W0129 09:17:45.091499 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd41faef_aa84_4754_8dc1_36aeafc1e4c3.slice/crio-2cac0040b3f64f45fd9b7e4ce013e55702c273a807db129823926ad8b7c8ac0b WatchSource:0}: Error finding container 2cac0040b3f64f45fd9b7e4ce013e55702c273a807db129823926ad8b7c8ac0b: Status 404 returned error can't find the container with id 2cac0040b3f64f45fd9b7e4ce013e55702c273a807db129823926ad8b7c8ac0b Jan 29 09:17:45 crc kubenswrapper[4771]: I0129 09:17:45.215753 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt"] Jan 29 09:17:45 crc kubenswrapper[4771]: I0129 09:17:45.717269 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" event={"ID":"fd41faef-aa84-4754-8dc1-36aeafc1e4c3","Type":"ContainerStarted","Data":"2cac0040b3f64f45fd9b7e4ce013e55702c273a807db129823926ad8b7c8ac0b"} Jan 29 09:17:45 crc kubenswrapper[4771]: I0129 09:17:45.718272 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dds85" event={"ID":"0eb51574-328d-4156-aa8d-50355bb9d9c2","Type":"ContainerStarted","Data":"cf0ab496b37f4a74a42278cc431c52ab1c9f0c7280688ef3e5c6ba5085b3d6bc"} Jan 29 09:17:45 crc kubenswrapper[4771]: I0129 09:17:45.719502 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" event={"ID":"b8b4db5a-9eaa-4640-8031-185eede7de9b","Type":"ContainerStarted","Data":"aa24daf25da1485246ad8967f329f51bb700ffb1313d5bdc43490fffee916e1c"} Jan 29 09:17:50 crc kubenswrapper[4771]: I0129 09:17:50.750858 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" event={"ID":"b8b4db5a-9eaa-4640-8031-185eede7de9b","Type":"ContainerStarted","Data":"7ce9ac9804eb84857b8fb8a20423917efa01ac9c0b1bd504f587445e5cd13d0d"} Jan 29 09:17:50 crc kubenswrapper[4771]: I0129 09:17:50.779341 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7r6xt" podStartSLOduration=1.933391168 podStartE2EDuration="6.779320575s" podCreationTimestamp="2026-01-29 09:17:44 +0000 UTC" firstStartedPulling="2026-01-29 09:17:45.220712545 +0000 UTC m=+685.343552772" lastFinishedPulling="2026-01-29 09:17:50.066641952 +0000 UTC m=+690.189482179" observedRunningTime="2026-01-29 09:17:50.776253727 +0000 UTC m=+690.899093974" watchObservedRunningTime="2026-01-29 09:17:50.779320575 +0000 UTC m=+690.902160802" Jan 29 09:17:51 crc kubenswrapper[4771]: I0129 09:17:51.760222 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dds85" 
event={"ID":"0eb51574-328d-4156-aa8d-50355bb9d9c2","Type":"ContainerStarted","Data":"20be510ac64311cc425e6b0ee9b09477ce0df571d8adfe70b07dcedba6a06b53"} Jan 29 09:17:51 crc kubenswrapper[4771]: I0129 09:17:51.762510 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" event={"ID":"fd41faef-aa84-4754-8dc1-36aeafc1e4c3","Type":"ContainerStarted","Data":"f7aca2b594e8184715b80465e681ece374187c3deaed6e44f332041c12a541c9"} Jan 29 09:17:51 crc kubenswrapper[4771]: I0129 09:17:51.762788 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" Jan 29 09:17:51 crc kubenswrapper[4771]: I0129 09:17:51.781548 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-dds85" podStartSLOduration=1.9081661749999999 podStartE2EDuration="7.781529199s" podCreationTimestamp="2026-01-29 09:17:44 +0000 UTC" firstStartedPulling="2026-01-29 09:17:45.068742251 +0000 UTC m=+685.191582478" lastFinishedPulling="2026-01-29 09:17:50.942105265 +0000 UTC m=+691.064945502" observedRunningTime="2026-01-29 09:17:51.77572756 +0000 UTC m=+691.898567837" watchObservedRunningTime="2026-01-29 09:17:51.781529199 +0000 UTC m=+691.904369426" Jan 29 09:17:51 crc kubenswrapper[4771]: I0129 09:17:51.795950 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" podStartSLOduration=2.737436548 podStartE2EDuration="7.795927187s" podCreationTimestamp="2026-01-29 09:17:44 +0000 UTC" firstStartedPulling="2026-01-29 09:17:45.093918014 +0000 UTC m=+685.216758241" lastFinishedPulling="2026-01-29 09:17:50.152408653 +0000 UTC m=+690.275248880" observedRunningTime="2026-01-29 09:17:51.792199331 +0000 UTC m=+691.915039568" watchObservedRunningTime="2026-01-29 09:17:51.795927187 +0000 UTC m=+691.918767414" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.232836 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntlqb"] Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.233720 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-controller" containerID="cri-o://118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.233879 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.233933 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-node" containerID="cri-o://03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.234026 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="sbdb" 
containerID="cri-o://7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.233868 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-acl-logging" containerID="cri-o://9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.233923 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="northd" containerID="cri-o://49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.233785 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="nbdb" containerID="cri-o://78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.270282 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" containerID="cri-o://9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" gracePeriod=30 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.554119 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/3.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.557550 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovn-acl-logging/0.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.558070 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovn-controller/0.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.558802 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.618745 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncng7"] Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619014 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619038 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619055 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="northd" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619067 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="northd" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619085 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619096 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619105 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-acl-logging" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619114 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-acl-logging" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619127 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="sbdb" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619136 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="sbdb" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619152 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619162 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619172 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619181 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619194 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-node" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619202 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-node" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619217 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619225 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619241 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="nbdb" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619267 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="nbdb" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619280 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619289 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619300 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kubecfg-setup" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619310 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kubecfg-setup" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619444 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619462 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619475 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619487 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="nbdb" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619498 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="sbdb" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619512 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619527 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-node" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619539 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="northd" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619551 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619562 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovn-acl-logging" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.619762 4771 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619775 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619907 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.619923 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerName="ovnkube-controller" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.622216 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-netd\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641279 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-systemd\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641319 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-env-overrides\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641318 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641335 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-kubelet\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641403 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-ovn-kubernetes\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641410 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641453 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-node-log\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641486 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-openvswitch\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641515 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641528 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-etc-openvswitch\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641544 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-node-log" (OuterVolumeSpecName: "node-log") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641548 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-netns\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641566 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641569 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd469\" (UniqueName: \"kubernetes.io/projected/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-kube-api-access-rd469\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641596 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-systemd-units\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641642 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovn-node-metrics-cert\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641686 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-ovn\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641723 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-log-socket\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641740 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-slash\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641736 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641756 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641822 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-bin\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641846 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-var-lib-openvswitch\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641865 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-config\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641885 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-script-lib\") pod \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\" (UID: \"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5\") " Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642051 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-node-log\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642095 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xxj5\" (UniqueName: \"kubernetes.io/projected/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-kube-api-access-6xxj5\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642116 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-kubelet\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc 
kubenswrapper[4771]: I0129 09:17:54.642129 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-ovn\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642161 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-run-netns\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642176 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-systemd\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642194 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovnkube-config\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642224 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-slash\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642239 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-log-socket\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642260 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-cni-bin\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642288 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-etc-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642318 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncng7\" (UID: 
\"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642359 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-env-overrides\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovn-node-metrics-cert\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-systemd-units\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642417 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-cni-netd\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642436 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovnkube-script-lib\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642456 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642482 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-var-lib-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642540 4771 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642551 4771 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642560 4771 
reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642571 4771 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642580 4771 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642589 4771 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641776 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642599 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642955 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.643004 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.641790 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642426 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-log-socket" (OuterVolumeSpecName: "log-socket") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642456 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642497 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642534 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-slash" (OuterVolumeSpecName: "host-slash") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.642578 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.643082 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.651222 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.655202 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-kube-api-access-rd469" (OuterVolumeSpecName: "kube-api-access-rd469") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "kube-api-access-rd469". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.662657 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" (UID: "ff7f16f4-439f-4743-b5f2-b9c6f6c346f5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.743959 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-run-netns\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744020 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-systemd\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744040 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovnkube-config\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744073 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-slash\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744096 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-log-socket\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744122 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-cni-bin\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744146 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-etc-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744142 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-run-netns\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc 
kubenswrapper[4771]: I0129 09:17:54.744172 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744229 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744270 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-env-overrides\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744296 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovn-node-metrics-cert\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744313 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-systemd-units\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744338 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-cni-netd\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744356 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovnkube-script-lib\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744379 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744402 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-var-lib-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-node-log\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744497 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744523 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xxj5\" (UniqueName: \"kubernetes.io/projected/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-kube-api-access-6xxj5\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744541 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-kubelet\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744555 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-ovn\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744634 4771 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744645 4771 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744657 4771 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744667 4771 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744679 4771 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744710 4771 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744719 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744727 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744736 4771 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744747 4771 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744757 4771 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744795 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd469\" (UniqueName: \"kubernetes.io/projected/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-kube-api-access-rd469\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744807 4771 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744822 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-ovn\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744934 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-node-log\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744979 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744980 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744943 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-kubelet\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745005 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-var-lib-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745042 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-systemd-units\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745071 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-log-socket\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745104 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-slash\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745132 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-cni-netd\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.744272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-run-systemd\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-host-cni-bin\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745340 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-etc-openvswitch\") pod \"ovnkube-node-ncng7\" (UID: 
\"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745362 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovnkube-config\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745596 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovnkube-script-lib\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.745635 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-env-overrides\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.749567 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-ovn-node-metrics-cert\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.761544 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xxj5\" (UniqueName: \"kubernetes.io/projected/c1a549fe-2eb5-480a-91cd-a50235c5f5ae-kube-api-access-6xxj5\") pod \"ovnkube-node-ncng7\" (UID: \"c1a549fe-2eb5-480a-91cd-a50235c5f5ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.795767 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovnkube-controller/3.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.799891 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovn-acl-logging/0.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800343 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ntlqb_ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/ovn-controller/0.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800725 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" exitCode=0 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800752 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" exitCode=0 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800760 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" exitCode=0 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800767 4771 generic.go:334] 
"Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" exitCode=0 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800775 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" exitCode=0 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800782 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" exitCode=0 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800789 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" exitCode=143 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800796 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" containerID="118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" exitCode=143 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800867 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800907 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800935 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800949 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800961 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800972 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800983 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.800995 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801001 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801007 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801012 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801017 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801022 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801027 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801034 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801042 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801051 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801058 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801064 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801070 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801076 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801081 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801087 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801092 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801097 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801103 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801110 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801118 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801125 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801130 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801137 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801142 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801148 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801155 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801161 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801166 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801172 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801180 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" event={"ID":"ff7f16f4-439f-4743-b5f2-b9c6f6c346f5","Type":"ContainerDied","Data":"628e5852b78d5cb1a949025b880e6f3b6e5e88212f9df3deef3cf867a9bb474e"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801187 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801194 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801200 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801206 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801211 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801216 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801222 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801227 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801232 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801237 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801251 4771 scope.go:117] "RemoveContainer" containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.801443 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ntlqb" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.805342 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/2.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.806132 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/1.log" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.806198 4771 generic.go:334] "Generic (PLEG): container finished" podID="a46c7969-6ce3-4ba5-a1ab-73bbf487ae73" containerID="6993705b1fbc7657dfd5e05501f942071bb1210c0422fce04cd3c57968286297" exitCode=2 Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.806233 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerDied","Data":"6993705b1fbc7657dfd5e05501f942071bb1210c0422fce04cd3c57968286297"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.806266 4771 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3"} Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.806991 4771 scope.go:117] "RemoveContainer" containerID="6993705b1fbc7657dfd5e05501f942071bb1210c0422fce04cd3c57968286297" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.807198 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cfc8z_openshift-multus(a46c7969-6ce3-4ba5-a1ab-73bbf487ae73)\"" pod="openshift-multus/multus-cfc8z" podUID="a46c7969-6ce3-4ba5-a1ab-73bbf487ae73" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.826744 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.850529 4771 scope.go:117] "RemoveContainer" containerID="7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.850960 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntlqb"] Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.853933 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ntlqb"] Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.875565 4771 scope.go:117] "RemoveContainer" containerID="78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.888503 4771 scope.go:117] "RemoveContainer" containerID="49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.902127 4771 scope.go:117] "RemoveContainer" containerID="1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.914569 4771 scope.go:117] "RemoveContainer" containerID="03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.926595 4771 scope.go:117] "RemoveContainer" containerID="9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.939973 4771 
scope.go:117] "RemoveContainer" containerID="118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.940511 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.952428 4771 scope.go:117] "RemoveContainer" containerID="c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254" Jan 29 09:17:54 crc kubenswrapper[4771]: W0129 09:17:54.960826 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a549fe_2eb5_480a_91cd_a50235c5f5ae.slice/crio-d863ef4a7d0a1beb8c8f7670ebc1daa37bb9d29007aad52879f1f9561768487e WatchSource:0}: Error finding container d863ef4a7d0a1beb8c8f7670ebc1daa37bb9d29007aad52879f1f9561768487e: Status 404 returned error can't find the container with id d863ef4a7d0a1beb8c8f7670ebc1daa37bb9d29007aad52879f1f9561768487e Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.969712 4771 scope.go:117] "RemoveContainer" containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.970143 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": container with ID starting with 9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d not found: ID does not exist" containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.970192 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} err="failed to get container status \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": rpc error: code = NotFound desc = could not find container \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": container with ID starting with 9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.970218 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.970565 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": container with ID starting with 363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168 not found: ID does not exist" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.970623 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} err="failed to get container status \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": rpc error: code = NotFound desc = could not find container \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": container with ID starting with 363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.970655 
4771 scope.go:117] "RemoveContainer" containerID="7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.970915 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": container with ID starting with 7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705 not found: ID does not exist" containerID="7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.970944 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} err="failed to get container status \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": rpc error: code = NotFound desc = could not find container \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": container with ID starting with 7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.970982 4771 scope.go:117] "RemoveContainer" containerID="78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.971352 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": container with ID starting with 78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba not found: ID does not exist" containerID="78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.971380 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} err="failed to get container status \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": rpc error: code = NotFound desc = could not find container \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": container with ID starting with 78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.971396 4771 scope.go:117] "RemoveContainer" containerID="49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.971744 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": container with ID starting with 49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47 not found: ID does not exist" containerID="49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.971769 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} err="failed to get container status \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": rpc error: code = NotFound desc = could not find container \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": container with ID starting with 
49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.971782 4771 scope.go:117] "RemoveContainer" containerID="1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.972039 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": container with ID starting with 1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339 not found: ID does not exist" containerID="1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.972065 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} err="failed to get container status \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": rpc error: code = NotFound desc = could not find container \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": container with ID starting with 1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.972079 4771 scope.go:117] "RemoveContainer" containerID="03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.972391 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": container with ID starting with 03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86 not found: ID does not exist" containerID="03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.972418 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} err="failed to get container status \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": rpc error: code = NotFound desc = could not find container \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": container with ID starting with 03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.972433 4771 scope.go:117] "RemoveContainer" containerID="9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.972712 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": container with ID starting with 9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2 not found: ID does not exist" containerID="9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.972751 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} err="failed to get container status \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": rpc 
error: code = NotFound desc = could not find container \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": container with ID starting with 9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.972770 4771 scope.go:117] "RemoveContainer" containerID="118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.973506 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": container with ID starting with 118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42 not found: ID does not exist" containerID="118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.973604 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} err="failed to get container status \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": rpc error: code = NotFound desc = could not find container \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": container with ID starting with 118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.973626 4771 scope.go:117] "RemoveContainer" containerID="c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254" Jan 29 09:17:54 crc kubenswrapper[4771]: E0129 09:17:54.974037 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": container with ID starting with c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254 not found: ID does not exist" containerID="c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.974073 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} err="failed to get container status \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": rpc error: code = NotFound desc = could not find container \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": container with ID starting with c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.974094 4771 scope.go:117] "RemoveContainer" containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.974329 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} err="failed to get container status \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": rpc error: code = NotFound desc = could not find container \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": container with ID starting with 9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 
09:17:54.974357 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.974832 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} err="failed to get container status \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": rpc error: code = NotFound desc = could not find container \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": container with ID starting with 363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.974857 4771 scope.go:117] "RemoveContainer" containerID="7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.975117 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} err="failed to get container status \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": rpc error: code = NotFound desc = could not find container \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": container with ID starting with 7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.975144 4771 scope.go:117] "RemoveContainer" containerID="78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.975456 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} err="failed to get container status \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": rpc error: code = NotFound desc = could not find container \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": container with ID starting with 78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.975485 4771 scope.go:117] "RemoveContainer" containerID="49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.975854 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} err="failed to get container status \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": rpc error: code = NotFound desc = could not find container \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": container with ID starting with 49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.975881 4771 scope.go:117] "RemoveContainer" containerID="1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.976154 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} err="failed to get container status 
\"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": rpc error: code = NotFound desc = could not find container \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": container with ID starting with 1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.976184 4771 scope.go:117] "RemoveContainer" containerID="03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.976474 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} err="failed to get container status \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": rpc error: code = NotFound desc = could not find container \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": container with ID starting with 03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.976514 4771 scope.go:117] "RemoveContainer" containerID="9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.976834 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} err="failed to get container status \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": rpc error: code = NotFound desc = could not find container \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": container with ID starting with 9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.976853 4771 scope.go:117] "RemoveContainer" containerID="118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.977245 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} err="failed to get container status \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": rpc error: code = NotFound desc = could not find container \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": container with ID starting with 118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.977270 4771 scope.go:117] "RemoveContainer" containerID="c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.977566 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} err="failed to get container status \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": rpc error: code = NotFound desc = could not find container \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": container with ID starting with c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.977592 4771 scope.go:117] "RemoveContainer" 
containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.978141 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} err="failed to get container status \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": rpc error: code = NotFound desc = could not find container \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": container with ID starting with 9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.978170 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.978565 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} err="failed to get container status \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": rpc error: code = NotFound desc = could not find container \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": container with ID starting with 363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.978590 4771 scope.go:117] "RemoveContainer" containerID="7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.978826 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} err="failed to get container status \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": rpc error: code = NotFound desc = could not find container \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": container with ID starting with 7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.978852 4771 scope.go:117] "RemoveContainer" containerID="78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.979263 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} err="failed to get container status \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": rpc error: code = NotFound desc = could not find container \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": container with ID starting with 78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.979286 4771 scope.go:117] "RemoveContainer" containerID="49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.979554 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} err="failed to get container status \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": rpc error: code = NotFound desc = could not find 
container \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": container with ID starting with 49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.979584 4771 scope.go:117] "RemoveContainer" containerID="1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.979931 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} err="failed to get container status \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": rpc error: code = NotFound desc = could not find container \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": container with ID starting with 1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.979961 4771 scope.go:117] "RemoveContainer" containerID="03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.980189 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} err="failed to get container status \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": rpc error: code = NotFound desc = could not find container \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": container with ID starting with 03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.980217 4771 scope.go:117] "RemoveContainer" containerID="9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.980510 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} err="failed to get container status \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": rpc error: code = NotFound desc = could not find container \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": container with ID starting with 9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.980543 4771 scope.go:117] "RemoveContainer" containerID="118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.980892 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} err="failed to get container status \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": rpc error: code = NotFound desc = could not find container \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": container with ID starting with 118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.980920 4771 scope.go:117] "RemoveContainer" containerID="c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.981191 4771 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} err="failed to get container status \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": rpc error: code = NotFound desc = could not find container \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": container with ID starting with c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.981214 4771 scope.go:117] "RemoveContainer" containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.981546 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} err="failed to get container status \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": rpc error: code = NotFound desc = could not find container \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": container with ID starting with 9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.981577 4771 scope.go:117] "RemoveContainer" containerID="363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.981848 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168"} err="failed to get container status \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": rpc error: code = NotFound desc = could not find container \"363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168\": container with ID starting with 363b72f90a0fd3e6723cb26f32d663f2aa1834d5477573350c7a60f316c29168 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.981874 4771 scope.go:117] "RemoveContainer" containerID="7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.982126 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705"} err="failed to get container status \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": rpc error: code = NotFound desc = could not find container \"7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705\": container with ID starting with 7e9aea966ab0beda7dfb1b94f0912a30e492e6a53d9885bcb4fbeb923314b705 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.982160 4771 scope.go:117] "RemoveContainer" containerID="78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.982371 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba"} err="failed to get container status \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": rpc error: code = NotFound desc = could not find container \"78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba\": container with ID starting with 
78355caaf238f5980b87371ed552bc631aa6fd0379c2e72c442d927475dfe6ba not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.982399 4771 scope.go:117] "RemoveContainer" containerID="49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.982664 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47"} err="failed to get container status \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": rpc error: code = NotFound desc = could not find container \"49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47\": container with ID starting with 49a3154347248cfae3af9a6516ebcf0c4de316c86bf0efc00db42e85ec2aff47 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.982710 4771 scope.go:117] "RemoveContainer" containerID="1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.982993 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339"} err="failed to get container status \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": rpc error: code = NotFound desc = could not find container \"1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339\": container with ID starting with 1bb24dfae29e3b1ac3dd883ed52a22d817663155c28271ba1a478451e3cb5339 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.983020 4771 scope.go:117] "RemoveContainer" containerID="03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.983276 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86"} err="failed to get container status \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": rpc error: code = NotFound desc = could not find container \"03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86\": container with ID starting with 03fa3d0827c6433ecd8340f8c975fa71107775bb146b193b099e1fc853f04a86 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.983303 4771 scope.go:117] "RemoveContainer" containerID="9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.983532 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2"} err="failed to get container status \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": rpc error: code = NotFound desc = could not find container \"9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2\": container with ID starting with 9eec12ae05d0c9c4877640672d4427cfb71146bf833c5121d427c30466539bf2 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.983557 4771 scope.go:117] "RemoveContainer" containerID="118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.983843 4771 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42"} err="failed to get container status \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": rpc error: code = NotFound desc = could not find container \"118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42\": container with ID starting with 118809e4eb14952d4f08b901bd0e44d425c936f70d37498ebc8bb0f39ade0c42 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.983864 4771 scope.go:117] "RemoveContainer" containerID="c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.984141 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254"} err="failed to get container status \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": rpc error: code = NotFound desc = could not find container \"c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254\": container with ID starting with c7b452da8d11f9279bec6eea724e72a3997ef86bfbf4ade11103dbea40e2e254 not found: ID does not exist" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.984164 4771 scope.go:117] "RemoveContainer" containerID="9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d" Jan 29 09:17:54 crc kubenswrapper[4771]: I0129 09:17:54.984383 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d"} err="failed to get container status \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": rpc error: code = NotFound desc = could not find container \"9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d\": container with ID starting with 9db297a128a0f775f9f7d6bd4824f4bdc810d3ab1a396d728320b59e8f4eae8d not found: ID does not exist" Jan 29 09:17:55 crc kubenswrapper[4771]: I0129 09:17:55.812505 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1a549fe-2eb5-480a-91cd-a50235c5f5ae" containerID="f9dd46b7bc388e190f4c273211c91edf28945295371acabeea1dd96fef58bcb0" exitCode=0 Jan 29 09:17:55 crc kubenswrapper[4771]: I0129 09:17:55.812646 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerDied","Data":"f9dd46b7bc388e190f4c273211c91edf28945295371acabeea1dd96fef58bcb0"} Jan 29 09:17:55 crc kubenswrapper[4771]: I0129 09:17:55.812707 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"d863ef4a7d0a1beb8c8f7670ebc1daa37bb9d29007aad52879f1f9561768487e"} Jan 29 09:17:56 crc kubenswrapper[4771]: I0129 09:17:56.833478 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"5ec1981a8a9182f10966e625389ccbf509e8e5b1edeaf60c307f8036cb1ee9a4"} Jan 29 09:17:56 crc kubenswrapper[4771]: I0129 09:17:56.834265 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"82238de9e07ddb4592e4b105c6689ba73d89fbd95f69aac35be680bd1fb24fa5"} Jan 29 
09:17:56 crc kubenswrapper[4771]: I0129 09:17:56.834282 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"fa9cce1beccec4ab1ec69b685dffaa280c897c6d549e5939c6e2bb949fed869d"} Jan 29 09:17:56 crc kubenswrapper[4771]: I0129 09:17:56.834292 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"cd32a8472917b68c8c6dfcdac1c9e86d8370b92458f2e755c0522d0b97b91693"} Jan 29 09:17:56 crc kubenswrapper[4771]: I0129 09:17:56.834300 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"117ee833f64aa05ae0ea5577e85696cc50ddeb46f94acf13829787489e780345"} Jan 29 09:17:56 crc kubenswrapper[4771]: I0129 09:17:56.834308 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"5dc9531a5db95aa3fd55f1cb628bd6bb7628c90d91da6e87c77c7835eb71da96"} Jan 29 09:17:56 crc kubenswrapper[4771]: I0129 09:17:56.845520 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7f16f4-439f-4743-b5f2-b9c6f6c346f5" path="/var/lib/kubelet/pods/ff7f16f4-439f-4743-b5f2-b9c6f6c346f5/volumes" Jan 29 09:17:58 crc kubenswrapper[4771]: I0129 09:17:58.848674 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"3252f7d5806f0acc87b5c8daa83c2703fdb1d689cf3d8b9884d7403ee20dae12"} Jan 29 09:17:59 crc kubenswrapper[4771]: I0129 09:17:59.789860 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-knhgs" Jan 29 09:18:01 crc kubenswrapper[4771]: I0129 09:18:01.869231 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" event={"ID":"c1a549fe-2eb5-480a-91cd-a50235c5f5ae","Type":"ContainerStarted","Data":"fb31ca7c0f1922c7df8214a1dfa56ee93e92ae42daf575b24e0adf7b726b2073"} Jan 29 09:18:01 crc kubenswrapper[4771]: I0129 09:18:01.869710 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:18:01 crc kubenswrapper[4771]: I0129 09:18:01.869727 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:18:01 crc kubenswrapper[4771]: I0129 09:18:01.896585 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:18:01 crc kubenswrapper[4771]: I0129 09:18:01.903842 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" podStartSLOduration=7.903825532 podStartE2EDuration="7.903825532s" podCreationTimestamp="2026-01-29 09:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:18:01.901864192 +0000 UTC m=+702.024704429" watchObservedRunningTime="2026-01-29 09:18:01.903825532 +0000 UTC m=+702.026665759" Jan 29 09:18:02 crc kubenswrapper[4771]: I0129 09:18:02.874496 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:18:02 crc kubenswrapper[4771]: I0129 09:18:02.903676 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:18:08 crc kubenswrapper[4771]: I0129 09:18:08.838125 4771 scope.go:117] "RemoveContainer" containerID="6993705b1fbc7657dfd5e05501f942071bb1210c0422fce04cd3c57968286297" Jan 29 09:18:08 crc kubenswrapper[4771]: E0129 09:18:08.839960 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cfc8z_openshift-multus(a46c7969-6ce3-4ba5-a1ab-73bbf487ae73)\"" pod="openshift-multus/multus-cfc8z" podUID="a46c7969-6ce3-4ba5-a1ab-73bbf487ae73" Jan 29 09:18:21 crc kubenswrapper[4771]: I0129 09:18:21.161222 4771 scope.go:117] "RemoveContainer" containerID="2bb21747e1208cad2400c342f3999fde3160f5e91fe913240d64292b91de67b3" Jan 29 09:18:21 crc kubenswrapper[4771]: I0129 09:18:21.838665 4771 scope.go:117] "RemoveContainer" containerID="6993705b1fbc7657dfd5e05501f942071bb1210c0422fce04cd3c57968286297" Jan 29 09:18:21 crc kubenswrapper[4771]: I0129 09:18:21.971078 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/2.log" Jan 29 09:18:22 crc kubenswrapper[4771]: I0129 09:18:22.980033 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cfc8z_a46c7969-6ce3-4ba5-a1ab-73bbf487ae73/kube-multus/2.log" Jan 29 09:18:22 crc kubenswrapper[4771]: I0129 09:18:22.980224 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cfc8z" event={"ID":"a46c7969-6ce3-4ba5-a1ab-73bbf487ae73","Type":"ContainerStarted","Data":"40260c39aeab85fae744b5d4fcd396280cce78fac526a1147ea546573eab736c"} Jan 29 09:18:24 crc kubenswrapper[4771]: I0129 09:18:24.971753 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncng7" Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.676353 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"] Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.678161 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.680812 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.689137 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"]
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.787479 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.787629 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4xd\" (UniqueName: \"kubernetes.io/projected/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-kube-api-access-kf4xd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.787856 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.889385 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4xd\" (UniqueName: \"kubernetes.io/projected/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-kube-api-access-kf4xd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.889479 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.889529 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.890146 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.890169 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.918399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4xd\" (UniqueName: \"kubernetes.io/projected/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-kube-api-access-kf4xd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:42 crc kubenswrapper[4771]: I0129 09:18:42.999605 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:43 crc kubenswrapper[4771]: I0129 09:18:43.439815 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"]
Jan 29 09:18:44 crc kubenswrapper[4771]: I0129 09:18:44.145518 4771 generic.go:334] "Generic (PLEG): container finished" podID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerID="6d8480b932c5eec7cf78f93dc5ec482affb786c8c1497a2ab8e978c8f857fc63" exitCode=0
Jan 29 09:18:44 crc kubenswrapper[4771]: I0129 09:18:44.145597 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7" event={"ID":"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33","Type":"ContainerDied","Data":"6d8480b932c5eec7cf78f93dc5ec482affb786c8c1497a2ab8e978c8f857fc63"}
Jan 29 09:18:44 crc kubenswrapper[4771]: I0129 09:18:44.145999 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7" event={"ID":"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33","Type":"ContainerStarted","Data":"555e8eb05eecbc37fb32b5aafd922c94928e66dc93c2695f686e37a11baacaa9"}
Jan 29 09:18:46 crc kubenswrapper[4771]: I0129 09:18:46.160841 4771 generic.go:334] "Generic (PLEG): container finished" podID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerID="aae8b8aa187512866dc497a534bbafdd15985c8776bcf843d9eb30772ea2e1e4" exitCode=0
Jan 29 09:18:46 crc kubenswrapper[4771]: I0129 09:18:46.160964 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7" event={"ID":"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33","Type":"ContainerDied","Data":"aae8b8aa187512866dc497a534bbafdd15985c8776bcf843d9eb30772ea2e1e4"}
Jan 29 09:18:47 crc kubenswrapper[4771]: I0129 09:18:47.168315 4771 generic.go:334] "Generic (PLEG): container finished" podID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerID="91eb91533ea06b42bfa212098c0eac19f717bf6e2a641f28b11b33f11dff18a5" exitCode=0
Jan 29 09:18:47 crc kubenswrapper[4771]: I0129 09:18:47.168419 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7" event={"ID":"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33","Type":"ContainerDied","Data":"91eb91533ea06b42bfa212098c0eac19f717bf6e2a641f28b11b33f11dff18a5"}
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.425730 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.565313 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-util\") pod \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") "
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.565383 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-bundle\") pod \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") "
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.565469 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf4xd\" (UniqueName: \"kubernetes.io/projected/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-kube-api-access-kf4xd\") pod \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\" (UID: \"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33\") "
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.566207 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-bundle" (OuterVolumeSpecName: "bundle") pod "c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" (UID: "c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.573845 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-kube-api-access-kf4xd" (OuterVolumeSpecName: "kube-api-access-kf4xd") pod "c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" (UID: "c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33"). InnerVolumeSpecName "kube-api-access-kf4xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.605173 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-util" (OuterVolumeSpecName: "util") pod "c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" (UID: "c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.667064 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf4xd\" (UniqueName: \"kubernetes.io/projected/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-kube-api-access-kf4xd\") on node \"crc\" DevicePath \"\""
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.667153 4771 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-util\") on node \"crc\" DevicePath \"\""
Jan 29 09:18:48 crc kubenswrapper[4771]: I0129 09:18:48.667169 4771 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:18:49 crc kubenswrapper[4771]: I0129 09:18:49.187080 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7" event={"ID":"c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33","Type":"ContainerDied","Data":"555e8eb05eecbc37fb32b5aafd922c94928e66dc93c2695f686e37a11baacaa9"}
Jan 29 09:18:49 crc kubenswrapper[4771]: I0129 09:18:49.187552 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555e8eb05eecbc37fb32b5aafd922c94928e66dc93c2695f686e37a11baacaa9"
Jan 29 09:18:49 crc kubenswrapper[4771]: I0129 09:18:49.187214 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.901621 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zq5mf"]
Jan 29 09:18:50 crc kubenswrapper[4771]: E0129 09:18:50.901913 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerName="extract"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.901927 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerName="extract"
Jan 29 09:18:50 crc kubenswrapper[4771]: E0129 09:18:50.901945 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerName="pull"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.901951 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerName="pull"
Jan 29 09:18:50 crc kubenswrapper[4771]: E0129 09:18:50.901965 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerName="util"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.901973 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerName="util"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.902083 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33" containerName="extract"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.902543 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf"
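The records above walk the kubelet volume reconciler through the bundle pod's whole life: VerifyControllerAttachedVolume and MountVolume while the pod is desired, then, once the pod is deleted, UnmountVolume for each of its three volumes followed by "Volume detached". Below is a minimal sketch of that desired-state/actual-state loop; the function and names are invented for illustration and are not kubelet's API.

def reconcile(desired: set[str], actual: set[str]) -> list[str]:
    """One pass of a desired-state vs. actual-state volume reconciler."""
    ops = []
    for vol in sorted(desired - actual):   # pod added: mount what is missing
        ops.append(f'MountVolume started for volume "{vol}"')
    for vol in sorted(actual - desired):   # pod deleted: unmount what remains
        ops.append(f'UnmountVolume started for volume "{vol}"')
    return ops

# When the bundle pod is deleted its volumes leave the desired state, which
# yields the three UnmountVolume operations logged at 09:18:48 above:
for op in reconcile(desired=set(), actual={"util", "bundle", "kube-api-access-kf4xd"}):
    print(op)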
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.904987 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.905629 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.911422 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zq5mf"]
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.911802 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-58h7p"
Jan 29 09:18:50 crc kubenswrapper[4771]: I0129 09:18:50.997827 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlvtq\" (UniqueName: \"kubernetes.io/projected/6a2f99d9-e297-4f09-8afd-3ab95322be73-kube-api-access-wlvtq\") pod \"nmstate-operator-646758c888-zq5mf\" (UID: \"6a2f99d9-e297-4f09-8afd-3ab95322be73\") " pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf"
Jan 29 09:18:51 crc kubenswrapper[4771]: I0129 09:18:51.099723 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlvtq\" (UniqueName: \"kubernetes.io/projected/6a2f99d9-e297-4f09-8afd-3ab95322be73-kube-api-access-wlvtq\") pod \"nmstate-operator-646758c888-zq5mf\" (UID: \"6a2f99d9-e297-4f09-8afd-3ab95322be73\") " pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf"
Jan 29 09:18:51 crc kubenswrapper[4771]: I0129 09:18:51.124323 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlvtq\" (UniqueName: \"kubernetes.io/projected/6a2f99d9-e297-4f09-8afd-3ab95322be73-kube-api-access-wlvtq\") pod \"nmstate-operator-646758c888-zq5mf\" (UID: \"6a2f99d9-e297-4f09-8afd-3ab95322be73\") " pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf"
Jan 29 09:18:51 crc kubenswrapper[4771]: I0129 09:18:51.222648 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf"
Jan 29 09:18:51 crc kubenswrapper[4771]: I0129 09:18:51.464514 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zq5mf"]
Jan 29 09:18:52 crc kubenswrapper[4771]: I0129 09:18:52.203614 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf" event={"ID":"6a2f99d9-e297-4f09-8afd-3ab95322be73","Type":"ContainerStarted","Data":"554856b17eb7c5ef5ecd60d66f012d19df91186a03b3befb0ea976801ed3dc24"}
Jan 29 09:18:54 crc kubenswrapper[4771]: I0129 09:18:54.214042 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf" event={"ID":"6a2f99d9-e297-4f09-8afd-3ab95322be73","Type":"ContainerStarted","Data":"4b7985436da3d287d1b8e196a950e49b2fbee433d9f4e7b28686cafe0e0bf84b"}
Jan 29 09:18:54 crc kubenswrapper[4771]: I0129 09:18:54.231410 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-zq5mf" podStartSLOduration=1.8909905710000001 podStartE2EDuration="4.231388634s" podCreationTimestamp="2026-01-29 09:18:50 +0000 UTC" firstStartedPulling="2026-01-29 09:18:51.473737774 +0000 UTC m=+751.596578001" lastFinishedPulling="2026-01-29 09:18:53.814135837 +0000 UTC m=+753.936976064" observedRunningTime="2026-01-29 09:18:54.227376419 +0000 UTC m=+754.350216666" watchObservedRunningTime="2026-01-29 09:18:54.231388634 +0000 UTC m=+754.354228861"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.174742 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dcp48"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.176770 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.179055 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c6fmv"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.181947 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.183077 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.194008 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.195014 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dcp48"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.202982 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ht6nn"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.206014 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ht6nn"
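The "Observed pod startup duration" record for nmstate-operator above reports two durations. They are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That interpretation of the fields is an assumption; the numbers below are taken from the record, and the arithmetic reproduces both logged values.

# Offsets in seconds past 09:18, read from the nmstate-operator record above.
created    = 50.000000000   # podCreationTimestamp 2026-01-29 09:18:50
first_pull = 51.473737774   # firstStartedPulling
last_pull  = 53.814135837   # lastFinishedPulling
observed   = 54.231388634   # watchObservedRunningTime

e2e = observed - created                # podStartE2EDuration = 4.231388634s
slo = e2e - (last_pull - first_pull)    # podStartSLOduration = 1.890990571
print(f"e2e={e2e:.9f}s slo={slo:.9f}s")  # matches the log, up to float rounding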
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.211083 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.270088 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-dbus-socket\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.270166 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2tt4\" (UniqueName: \"kubernetes.io/projected/8b851beb-4218-4b45-8f3a-695b6c6cd02f-kube-api-access-k2tt4\") pod \"nmstate-webhook-8474b5b9d8-dgdkp\" (UID: \"8b851beb-4218-4b45-8f3a-695b6c6cd02f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.270211 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6h5\" (UniqueName: \"kubernetes.io/projected/a3334432-223c-4661-b12b-ec8524c6439d-kube-api-access-9w6h5\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.270249 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcx67\" (UniqueName: \"kubernetes.io/projected/1755a15c-b178-498c-a5ca-077feb480beb-kube-api-access-qcx67\") pod \"nmstate-metrics-54757c584b-dcp48\" (UID: \"1755a15c-b178-498c-a5ca-077feb480beb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.270334 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8b851beb-4218-4b45-8f3a-695b6c6cd02f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dgdkp\" (UID: \"8b851beb-4218-4b45-8f3a-695b6c6cd02f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.270389 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-nmstate-lock\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.270499 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-ovs-socket\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.367871 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.368600 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.370128 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wwht6"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371241 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371672 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w6h5\" (UniqueName: \"kubernetes.io/projected/a3334432-223c-4661-b12b-ec8524c6439d-kube-api-access-9w6h5\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcx67\" (UniqueName: \"kubernetes.io/projected/1755a15c-b178-498c-a5ca-077feb480beb-kube-api-access-qcx67\") pod \"nmstate-metrics-54757c584b-dcp48\" (UID: \"1755a15c-b178-498c-a5ca-077feb480beb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8b851beb-4218-4b45-8f3a-695b6c6cd02f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dgdkp\" (UID: \"8b851beb-4218-4b45-8f3a-695b6c6cd02f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371820 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-nmstate-lock\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371828 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371865 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-ovs-socket\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: E0129 09:18:55.371897 4771 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371901 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-nmstate-lock\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371919 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-dbus-socket\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: E0129 09:18:55.371969 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b851beb-4218-4b45-8f3a-695b6c6cd02f-tls-key-pair podName:8b851beb-4218-4b45-8f3a-695b6c6cd02f nodeName:}" failed. No retries permitted until 2026-01-29 09:18:55.87193464 +0000 UTC m=+755.994774867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/8b851beb-4218-4b45-8f3a-695b6c6cd02f-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-dgdkp" (UID: "8b851beb-4218-4b45-8f3a-695b6c6cd02f") : secret "openshift-nmstate-webhook" not found
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.371972 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-ovs-socket\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.372011 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2tt4\" (UniqueName: \"kubernetes.io/projected/8b851beb-4218-4b45-8f3a-695b6c6cd02f-kube-api-access-k2tt4\") pod \"nmstate-webhook-8474b5b9d8-dgdkp\" (UID: \"8b851beb-4218-4b45-8f3a-695b6c6cd02f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.372239 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a3334432-223c-4661-b12b-ec8524c6439d-dbus-socket\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.380505 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.397618 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcx67\" (UniqueName: \"kubernetes.io/projected/1755a15c-b178-498c-a5ca-077feb480beb-kube-api-access-qcx67\") pod \"nmstate-metrics-54757c584b-dcp48\" (UID: \"1755a15c-b178-498c-a5ca-077feb480beb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.398035 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w6h5\" (UniqueName: \"kubernetes.io/projected/a3334432-223c-4661-b12b-ec8524c6439d-kube-api-access-9w6h5\") pod \"nmstate-handler-ht6nn\" (UID: \"a3334432-223c-4661-b12b-ec8524c6439d\") " pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.406886 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2tt4\" (UniqueName: \"kubernetes.io/projected/8b851beb-4218-4b45-8f3a-695b6c6cd02f-kube-api-access-k2tt4\") pod \"nmstate-webhook-8474b5b9d8-dgdkp\" (UID: \"8b851beb-4218-4b45-8f3a-695b6c6cd02f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.473387 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3164bb7-413d-4fc4-b166-62ea6f7840cd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
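The nestedpendingoperations error above is the volume manager's per-operation back-off at work: the tls-key-pair mount fails because the webhook's serving-cert secret does not exist yet, and no retry is permitted for 500ms. A later record (09:18:55.884726, further down) shows the retried mount succeeding once the secret has appeared. Below is a sketch of that back-off policy; the 500ms initial delay is the durationBeforeRetry from the log, while the doubling behavior and the roughly two-minute cap are assumed upstream constants, and the class is illustrative rather than kubelet code.

from datetime import datetime, timedelta

INITIAL = timedelta(milliseconds=500)   # durationBeforeRetry seen above
CAP = timedelta(minutes=2, seconds=2)   # assumed upstream cap

class ExponentialBackoff:
    """Per-operation back-off: grows on each failure, gates retries."""
    def __init__(self) -> None:
        self.duration = timedelta(0)
        self.not_before = datetime.min

    def update(self, now: datetime) -> None:
        """Record a failure and push out the earliest permitted retry."""
        self.duration = min(max(self.duration * 2, INITIAL), CAP)
        self.not_before = now + self.duration

    def retry_allowed(self, now: datetime) -> bool:
        return now >= self.not_before

b = ExponentialBackoff()
b.update(datetime(2026, 1, 29, 9, 18, 55, 371934))  # failure time from the log
print(b.not_before)  # 09:18:55.871934 -- cf. "No retries permitted until ... 09:18:55.87193464"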
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.473526 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f3164bb7-413d-4fc4-b166-62ea6f7840cd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.473565 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncrj\" (UniqueName: \"kubernetes.io/projected/f3164bb7-413d-4fc4-b166-62ea6f7840cd-kube-api-access-wncrj\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.497497 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.534178 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.564560 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-685fb5f6fd-c8b6m"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.567867 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.579105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f3164bb7-413d-4fc4-b166-62ea6f7840cd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.579192 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wncrj\" (UniqueName: \"kubernetes.io/projected/f3164bb7-413d-4fc4-b166-62ea6f7840cd-kube-api-access-wncrj\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.579502 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3164bb7-413d-4fc4-b166-62ea6f7840cd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.582279 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f3164bb7-413d-4fc4-b166-62ea6f7840cd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.613603 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3164bb7-413d-4fc4-b166-62ea6f7840cd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.613782 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-685fb5f6fd-c8b6m"]
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.614483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wncrj\" (UniqueName: \"kubernetes.io/projected/f3164bb7-413d-4fc4-b166-62ea6f7840cd-kube-api-access-wncrj\") pod \"nmstate-console-plugin-7754f76f8b-cv75t\" (UID: \"f3164bb7-413d-4fc4-b166-62ea6f7840cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.680571 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-oauth-serving-cert\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.680634 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-trusted-ca-bundle\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.680682 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-config\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.680731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-service-ca\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.680750 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpp6g\" (UniqueName: \"kubernetes.io/projected/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-kube-api-access-cpp6g\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.680893 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-serving-cert\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.680934 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-oauth-config\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.688483 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.783160 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-service-ca\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.783609 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpp6g\" (UniqueName: \"kubernetes.io/projected/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-kube-api-access-cpp6g\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.783649 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-serving-cert\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.783687 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-oauth-config\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.783726 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-oauth-serving-cert\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.783749 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-trusted-ca-bundle\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.783788 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-config\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.784346 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-service-ca\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.785003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-oauth-serving-cert\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.786162 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-config\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.792210 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-serving-cert\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.795019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-console-oauth-config\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.798759 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-trusted-ca-bundle\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.812215 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpp6g\" (UniqueName: \"kubernetes.io/projected/0013101c-7c9f-4fb4-9edc-81eb0cb852d5-kube-api-access-cpp6g\") pod \"console-685fb5f6fd-c8b6m\" (UID: \"0013101c-7c9f-4fb4-9edc-81eb0cb852d5\") " pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.884726 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8b851beb-4218-4b45-8f3a-695b6c6cd02f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dgdkp\" (UID: \"8b851beb-4218-4b45-8f3a-695b6c6cd02f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.890302 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8b851beb-4218-4b45-8f3a-695b6c6cd02f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dgdkp\" (UID: \"8b851beb-4218-4b45-8f3a-695b6c6cd02f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.922994 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:18:55 crc kubenswrapper[4771]: I0129 09:18:55.929470 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dcp48"]
Jan 29 09:18:55 crc kubenswrapper[4771]: W0129 09:18:55.953055 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1755a15c_b178_498c_a5ca_077feb480beb.slice/crio-d46c939bf69cce3f479843ce7a42e209ab28d574c08f8f14e0cd46f0fd641439 WatchSource:0}: Error finding container d46c939bf69cce3f479843ce7a42e209ab28d574c08f8f14e0cd46f0fd641439: Status 404 returned error can't find the container with id d46c939bf69cce3f479843ce7a42e209ab28d574c08f8f14e0cd46f0fd641439
Jan 29 09:18:56 crc kubenswrapper[4771]: I0129 09:18:56.042266 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t"]
Jan 29 09:18:56 crc kubenswrapper[4771]: W0129 09:18:56.048825 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3164bb7_413d_4fc4_b166_62ea6f7840cd.slice/crio-385a1943cf3b31b66b4a4decc962687078884f1a7559dd6505158754e379e4c5 WatchSource:0}: Error finding container 385a1943cf3b31b66b4a4decc962687078884f1a7559dd6505158754e379e4c5: Status 404 returned error can't find the container with id 385a1943cf3b31b66b4a4decc962687078884f1a7559dd6505158754e379e4c5
Jan 29 09:18:56 crc kubenswrapper[4771]: I0129 09:18:56.107682 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:18:56 crc kubenswrapper[4771]: I0129 09:18:56.237960 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48" event={"ID":"1755a15c-b178-498c-a5ca-077feb480beb","Type":"ContainerStarted","Data":"d46c939bf69cce3f479843ce7a42e209ab28d574c08f8f14e0cd46f0fd641439"}
Jan 29 09:18:56 crc kubenswrapper[4771]: I0129 09:18:56.239167 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t" event={"ID":"f3164bb7-413d-4fc4-b166-62ea6f7840cd","Type":"ContainerStarted","Data":"385a1943cf3b31b66b4a4decc962687078884f1a7559dd6505158754e379e4c5"}
Jan 29 09:18:56 crc kubenswrapper[4771]: I0129 09:18:56.240017 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ht6nn" event={"ID":"a3334432-223c-4661-b12b-ec8524c6439d","Type":"ContainerStarted","Data":"22e6353200e4b0e892fec3d0429e53e6d4e1958322a8a09f9976b6f0a27d335f"}
Jan 29 09:18:56 crc kubenswrapper[4771]: I0129 09:18:56.301895 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"]
Jan 29 09:18:56 crc kubenswrapper[4771]: I0129 09:18:56.348915 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-685fb5f6fd-c8b6m"]
Jan 29 09:18:56 crc kubenswrapper[4771]: W0129 09:18:56.353371 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0013101c_7c9f_4fb4_9edc_81eb0cb852d5.slice/crio-ff4e1015686199c7137f098d782622642511ed03aa70a7315e3e060ba2fa3151 WatchSource:0}: Error finding container ff4e1015686199c7137f098d782622642511ed03aa70a7315e3e060ba2fa3151: Status 404 returned error can't find the container with id ff4e1015686199c7137f098d782622642511ed03aa70a7315e3e060ba2fa3151
Jan 29 09:18:57 crc kubenswrapper[4771]: I0129 09:18:57.270180 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp" event={"ID":"8b851beb-4218-4b45-8f3a-695b6c6cd02f","Type":"ContainerStarted","Data":"13d43efbc9e5ed0f2185989032387b2d1044bcce6472bc828b6d15cc80e36b83"}
Jan 29 09:18:57 crc kubenswrapper[4771]: I0129 09:18:57.272360 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-685fb5f6fd-c8b6m" event={"ID":"0013101c-7c9f-4fb4-9edc-81eb0cb852d5","Type":"ContainerStarted","Data":"333a38d10558ad31a48f96cb71361b10975134bf55440abf8591c0b722914d3b"}
Jan 29 09:18:57 crc kubenswrapper[4771]: I0129 09:18:57.272403 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-685fb5f6fd-c8b6m" event={"ID":"0013101c-7c9f-4fb4-9edc-81eb0cb852d5","Type":"ContainerStarted","Data":"ff4e1015686199c7137f098d782622642511ed03aa70a7315e3e060ba2fa3151"}
Jan 29 09:18:57 crc kubenswrapper[4771]: I0129 09:18:57.301441 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-685fb5f6fd-c8b6m" podStartSLOduration=2.301414303 podStartE2EDuration="2.301414303s" podCreationTimestamp="2026-01-29 09:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:18:57.289751197 +0000 UTC m=+757.412591424" watchObservedRunningTime="2026-01-29 09:18:57.301414303 +0000 UTC m=+757.424254530"
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.305414 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp" event={"ID":"8b851beb-4218-4b45-8f3a-695b6c6cd02f","Type":"ContainerStarted","Data":"6f861ff1d82c8d12905f3300fec385cb1f8ef564a8c378744dfaff415bd61132"}
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.306307 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.309040 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48" event={"ID":"1755a15c-b178-498c-a5ca-077feb480beb","Type":"ContainerStarted","Data":"96949bbce251b785be309c3df2f23733a5cf52dcd8a88618d5bdaae60ce7dcc3"}
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.311051 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t" event={"ID":"f3164bb7-413d-4fc4-b166-62ea6f7840cd","Type":"ContainerStarted","Data":"f5ddc1a3567946f7c9df1c414fe92bf2bfcc322fef529a04fe9e3a80a7f8fb7f"}
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.313266 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ht6nn" event={"ID":"a3334432-223c-4661-b12b-ec8524c6439d","Type":"ContainerStarted","Data":"bd6a4230b4ac61220a1a16279855a12d5581485ed8dc95e6a6fadbc288b08cc7"}
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.313432 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.329780 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp" podStartSLOduration=2.416879982 podStartE2EDuration="5.329757516s" podCreationTimestamp="2026-01-29 09:18:55 +0000 UTC" firstStartedPulling="2026-01-29 09:18:56.311584722 +0000 UTC m=+756.434424949" lastFinishedPulling="2026-01-29 09:18:59.224462256 +0000 UTC m=+759.347302483" observedRunningTime="2026-01-29 09:19:00.32421747 +0000 UTC m=+760.447057717" watchObservedRunningTime="2026-01-29 09:19:00.329757516 +0000 UTC m=+760.452597753"
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.347152 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ht6nn" podStartSLOduration=1.711106413 podStartE2EDuration="5.347124923s" podCreationTimestamp="2026-01-29 09:18:55 +0000 UTC" firstStartedPulling="2026-01-29 09:18:55.61444441 +0000 UTC m=+755.737284637" lastFinishedPulling="2026-01-29 09:18:59.25046292 +0000 UTC m=+759.373303147" observedRunningTime="2026-01-29 09:19:00.344830182 +0000 UTC m=+760.467670419" watchObservedRunningTime="2026-01-29 09:19:00.347124923 +0000 UTC m=+760.469965150"
Jan 29 09:19:00 crc kubenswrapper[4771]: I0129 09:19:00.366119 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cv75t" podStartSLOduration=2.193293149 podStartE2EDuration="5.366094562s" podCreationTimestamp="2026-01-29 09:18:55 +0000 UTC" firstStartedPulling="2026-01-29 09:18:56.051124789 +0000 UTC m=+756.173965016" lastFinishedPulling="2026-01-29 09:18:59.223926192 +0000 UTC m=+759.346766429" observedRunningTime="2026-01-29 09:19:00.364262644 +0000 UTC m=+760.487102871" watchObservedRunningTime="2026-01-29 09:19:00.366094562 +0000 UTC m=+760.488934799"
Jan 29 09:19:02 crc kubenswrapper[4771]: I0129 09:19:02.327777 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48" event={"ID":"1755a15c-b178-498c-a5ca-077feb480beb","Type":"ContainerStarted","Data":"74104abdbffbbc63249023a5acb9c9bee2c7fda3938e5bf1292588b3b54a7e1b"}
Jan 29 09:19:02 crc kubenswrapper[4771]: I0129 09:19:02.349182 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-dcp48" podStartSLOduration=1.727694909 podStartE2EDuration="7.349148694s" podCreationTimestamp="2026-01-29 09:18:55 +0000 UTC" firstStartedPulling="2026-01-29 09:18:55.966082262 +0000 UTC m=+756.088922489" lastFinishedPulling="2026-01-29 09:19:01.587536047 +0000 UTC m=+761.710376274" observedRunningTime="2026-01-29 09:19:02.348526267 +0000 UTC m=+762.471366504" watchObservedRunningTime="2026-01-29 09:19:02.349148694 +0000 UTC m=+762.471988921"
Jan 29 09:19:05 crc kubenswrapper[4771]: I0129 09:19:05.556614 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ht6nn"
Jan 29 09:19:05 crc kubenswrapper[4771]: I0129 09:19:05.924559 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:19:05 crc kubenswrapper[4771]: I0129 09:19:05.925124 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:19:05 crc kubenswrapper[4771]: I0129 09:19:05.931887 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:19:06 crc kubenswrapper[4771]: I0129 09:19:06.367067 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-685fb5f6fd-c8b6m"
Jan 29 09:19:06 crc kubenswrapper[4771]: I0129 09:19:06.434256 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jzc5h"]
Jan 29 09:19:13 crc kubenswrapper[4771]: I0129 09:19:13.728167 4771 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 09:19:14 crc kubenswrapper[4771]: I0129 09:19:14.271138 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:19:14 crc kubenswrapper[4771]: I0129 09:19:14.271629 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:19:16 crc kubenswrapper[4771]: I0129 09:19:16.114713 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dgdkp"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.686605 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"]
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.688504 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.690813 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.700182 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"]
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.784689 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bz7\" (UniqueName: \"kubernetes.io/projected/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-kube-api-access-q6bz7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.784817 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.784933 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.886727 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.886774 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.886846 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6bz7\" (UniqueName: \"kubernetes.io/projected/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-kube-api-access-q6bz7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.887389 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.887964 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:29 crc kubenswrapper[4771]: I0129 09:19:29.909857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6bz7\" (UniqueName: \"kubernetes.io/projected/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-kube-api-access-q6bz7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:30 crc kubenswrapper[4771]: I0129 09:19:30.006667 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"
Jan 29 09:19:30 crc kubenswrapper[4771]: I0129 09:19:30.393484 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr"]
Jan 29 09:19:31 crc kubenswrapper[4771]: I0129 09:19:31.326038 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" event={"ID":"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0","Type":"ContainerStarted","Data":"d1db37a54c7ac8424a838fb1065ed51f288c3412a8eed2df89fbab3607305ba7"}
Jan 29 09:19:31 crc kubenswrapper[4771]: I0129 09:19:31.326525 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" event={"ID":"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0","Type":"ContainerStarted","Data":"354fe09700d4915695b858e1ceb9cc274097abda7ecb52b6ced2ccf1fab75be0"}
Jan 29 09:19:31 crc kubenswrapper[4771]: I0129 09:19:31.480277 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jzc5h" podUID="2fd142c7-125b-41ad-a645-c1eac4caa96b" containerName="console" containerID="cri-o://2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7" gracePeriod=15
Jan 29 09:19:31 crc kubenswrapper[4771]: I0129 09:19:31.849820 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jzc5h_2fd142c7-125b-41ad-a645-c1eac4caa96b/console/0.log"
Jan 29 09:19:31 crc kubenswrapper[4771]: I0129 09:19:31.850309 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jzc5h"
Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.018845 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zf7ln"]
Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.018974 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-trusted-ca-bundle\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") "
Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019034 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-service-ca\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") "
Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019069 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-oauth-config\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") "
Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019130 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-serving-cert\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") "
Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019176 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-config\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") "
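The "Killing container with a grace period" record above (gracePeriod=15) is the API delete observed at 09:19:06 finally reaching the old console container. Below is a sketch of the generic TERM-wait-KILL pattern it reflects, written for a local process; the real path goes through the CRI runtime (cri-o here), so this is an analogy, not kubelet code.

import os, signal, time

def kill_with_grace(pid: int, grace_seconds: float = 15.0) -> None:
    """Send SIGTERM, wait up to the grace period, then escalate to SIGKILL."""
    os.kill(pid, signal.SIGTERM)                 # polite stop request
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)                      # signal 0 probes liveness only
        except ProcessLookupError:
            return                               # exited within the grace period
        time.sleep(0.2)
    os.kill(pid, signal.SIGKILL)                 # hard stop after the deadline

Once the container is gone, the volume reconciler unmounts the dead pod's volumes, which is exactly what the UnmountVolume records around this point show.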
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-config\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019219 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnrg\" (UniqueName: \"kubernetes.io/projected/2fd142c7-125b-41ad-a645-c1eac4caa96b-kube-api-access-mfnrg\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " Jan 29 09:19:32 crc kubenswrapper[4771]: E0129 09:19:32.019234 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd142c7-125b-41ad-a645-c1eac4caa96b" containerName="console" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019249 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd142c7-125b-41ad-a645-c1eac4caa96b" containerName="console" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019276 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-oauth-serving-cert\") pod \"2fd142c7-125b-41ad-a645-c1eac4caa96b\" (UID: \"2fd142c7-125b-41ad-a645-c1eac4caa96b\") " Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.019391 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd142c7-125b-41ad-a645-c1eac4caa96b" containerName="console" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.020220 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-service-ca" (OuterVolumeSpecName: "service-ca") pod "2fd142c7-125b-41ad-a645-c1eac4caa96b" (UID: "2fd142c7-125b-41ad-a645-c1eac4caa96b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.020262 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2fd142c7-125b-41ad-a645-c1eac4caa96b" (UID: "2fd142c7-125b-41ad-a645-c1eac4caa96b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.020246 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2fd142c7-125b-41ad-a645-c1eac4caa96b" (UID: "2fd142c7-125b-41ad-a645-c1eac4caa96b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.020311 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.020793 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-config" (OuterVolumeSpecName: "console-config") pod "2fd142c7-125b-41ad-a645-c1eac4caa96b" (UID: "2fd142c7-125b-41ad-a645-c1eac4caa96b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.026604 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2fd142c7-125b-41ad-a645-c1eac4caa96b" (UID: "2fd142c7-125b-41ad-a645-c1eac4caa96b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.027650 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd142c7-125b-41ad-a645-c1eac4caa96b-kube-api-access-mfnrg" (OuterVolumeSpecName: "kube-api-access-mfnrg") pod "2fd142c7-125b-41ad-a645-c1eac4caa96b" (UID: "2fd142c7-125b-41ad-a645-c1eac4caa96b"). InnerVolumeSpecName "kube-api-access-mfnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.031867 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2fd142c7-125b-41ad-a645-c1eac4caa96b" (UID: "2fd142c7-125b-41ad-a645-c1eac4caa96b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.039571 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zf7ln"] Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120561 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-utilities\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120679 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-catalog-content\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120765 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbmpr\" (UniqueName: \"kubernetes.io/projected/076c0374-5fbf-4964-8653-9d954e66ba70-kube-api-access-sbmpr\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120855 4771 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120870 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120882 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120892 4771 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120905 4771 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120915 4771 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fd142c7-125b-41ad-a645-c1eac4caa96b-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.120925 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnrg\" (UniqueName: \"kubernetes.io/projected/2fd142c7-125b-41ad-a645-c1eac4caa96b-kube-api-access-mfnrg\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.222309 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-utilities\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.222401 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-catalog-content\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.222443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbmpr\" (UniqueName: \"kubernetes.io/projected/076c0374-5fbf-4964-8653-9d954e66ba70-kube-api-access-sbmpr\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.223076 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-utilities\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.223172 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-catalog-content\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.247105 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbmpr\" (UniqueName: \"kubernetes.io/projected/076c0374-5fbf-4964-8653-9d954e66ba70-kube-api-access-sbmpr\") pod \"redhat-operators-zf7ln\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 
crc kubenswrapper[4771]: I0129 09:19:32.335500 4771 generic.go:334] "Generic (PLEG): container finished" podID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerID="d1db37a54c7ac8424a838fb1065ed51f288c3412a8eed2df89fbab3607305ba7" exitCode=0 Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.335594 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" event={"ID":"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0","Type":"ContainerDied","Data":"d1db37a54c7ac8424a838fb1065ed51f288c3412a8eed2df89fbab3607305ba7"} Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.338591 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jzc5h_2fd142c7-125b-41ad-a645-c1eac4caa96b/console/0.log" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.338658 4771 generic.go:334] "Generic (PLEG): container finished" podID="2fd142c7-125b-41ad-a645-c1eac4caa96b" containerID="2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7" exitCode=2 Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.338720 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jzc5h" event={"ID":"2fd142c7-125b-41ad-a645-c1eac4caa96b","Type":"ContainerDied","Data":"2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7"} Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.338772 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jzc5h" event={"ID":"2fd142c7-125b-41ad-a645-c1eac4caa96b","Type":"ContainerDied","Data":"90fc2f0b2324eb31fad1021beab8e30f77b7046e97d0d4d098b4a81eda928290"} Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.338773 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jzc5h" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.338799 4771 scope.go:117] "RemoveContainer" containerID="2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.361682 4771 scope.go:117] "RemoveContainer" containerID="2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7" Jan 29 09:19:32 crc kubenswrapper[4771]: E0129 09:19:32.362418 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7\": container with ID starting with 2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7 not found: ID does not exist" containerID="2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.362455 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7"} err="failed to get container status \"2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7\": rpc error: code = NotFound desc = could not find container \"2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7\": container with ID starting with 2225d7ad524e77e471d9913424d82465eec0e6f2f5cb8ae909705f762601ccf7 not found: ID does not exist" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.365967 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.378143 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jzc5h"] Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.385005 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jzc5h"] Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.644581 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zf7ln"] Jan 29 09:19:32 crc kubenswrapper[4771]: W0129 09:19:32.651649 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod076c0374_5fbf_4964_8653_9d954e66ba70.slice/crio-151955bb325626f29b27dae3f0c48a8bc79e77e4465c4d2ad1416d73e36d1869 WatchSource:0}: Error finding container 151955bb325626f29b27dae3f0c48a8bc79e77e4465c4d2ad1416d73e36d1869: Status 404 returned error can't find the container with id 151955bb325626f29b27dae3f0c48a8bc79e77e4465c4d2ad1416d73e36d1869 Jan 29 09:19:32 crc kubenswrapper[4771]: I0129 09:19:32.845042 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd142c7-125b-41ad-a645-c1eac4caa96b" path="/var/lib/kubelet/pods/2fd142c7-125b-41ad-a645-c1eac4caa96b/volumes" Jan 29 09:19:33 crc kubenswrapper[4771]: I0129 09:19:33.347158 4771 generic.go:334] "Generic (PLEG): container finished" podID="076c0374-5fbf-4964-8653-9d954e66ba70" containerID="60d25b33155a407f7c09a1b4fec9f2e34fa6fb258d141436d40ad1db3fb203bd" exitCode=0 Jan 29 09:19:33 crc kubenswrapper[4771]: I0129 09:19:33.347342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf7ln" event={"ID":"076c0374-5fbf-4964-8653-9d954e66ba70","Type":"ContainerDied","Data":"60d25b33155a407f7c09a1b4fec9f2e34fa6fb258d141436d40ad1db3fb203bd"} Jan 29 09:19:33 crc kubenswrapper[4771]: I0129 09:19:33.347600 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf7ln" event={"ID":"076c0374-5fbf-4964-8653-9d954e66ba70","Type":"ContainerStarted","Data":"151955bb325626f29b27dae3f0c48a8bc79e77e4465c4d2ad1416d73e36d1869"} Jan 29 09:19:35 crc kubenswrapper[4771]: I0129 09:19:35.368956 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf7ln" event={"ID":"076c0374-5fbf-4964-8653-9d954e66ba70","Type":"ContainerStarted","Data":"0ac6c09def882f09b92e24349936a2cf863fc333d294cfa72b3367b4022a01fe"} Jan 29 09:19:35 crc kubenswrapper[4771]: I0129 09:19:35.372564 4771 generic.go:334] "Generic (PLEG): container finished" podID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerID="2e737f72208658e0d83fb871f11365efeea9f6f1329087848acdac4aae1e57f8" exitCode=0 Jan 29 09:19:35 crc kubenswrapper[4771]: I0129 09:19:35.372643 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" event={"ID":"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0","Type":"ContainerDied","Data":"2e737f72208658e0d83fb871f11365efeea9f6f1329087848acdac4aae1e57f8"} Jan 29 09:19:36 crc kubenswrapper[4771]: I0129 09:19:36.380363 4771 generic.go:334] "Generic (PLEG): container finished" podID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerID="cfa93c755efcb2524dea4374e581fe02287c8d77c56a1493d53ae432b85bb2da" exitCode=0 Jan 29 09:19:36 crc kubenswrapper[4771]: I0129 09:19:36.380452 4771 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" event={"ID":"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0","Type":"ContainerDied","Data":"cfa93c755efcb2524dea4374e581fe02287c8d77c56a1493d53ae432b85bb2da"} Jan 29 09:19:36 crc kubenswrapper[4771]: I0129 09:19:36.382294 4771 generic.go:334] "Generic (PLEG): container finished" podID="076c0374-5fbf-4964-8653-9d954e66ba70" containerID="0ac6c09def882f09b92e24349936a2cf863fc333d294cfa72b3367b4022a01fe" exitCode=0 Jan 29 09:19:36 crc kubenswrapper[4771]: I0129 09:19:36.382343 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf7ln" event={"ID":"076c0374-5fbf-4964-8653-9d954e66ba70","Type":"ContainerDied","Data":"0ac6c09def882f09b92e24349936a2cf863fc333d294cfa72b3367b4022a01fe"} Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.392317 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf7ln" event={"ID":"076c0374-5fbf-4964-8653-9d954e66ba70","Type":"ContainerStarted","Data":"627423b94f0414c5228479ae95ab137aeec7ca11a3bc210dd3623d256fa9f9f2"} Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.698496 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.702725 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-bundle\") pod \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.703729 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-bundle" (OuterVolumeSpecName: "bundle") pod "38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" (UID: "38d92cc8-2fa5-4a7b-8e90-214597eb9fc0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.804001 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6bz7\" (UniqueName: \"kubernetes.io/projected/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-kube-api-access-q6bz7\") pod \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.804089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-util\") pod \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\" (UID: \"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0\") " Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.804505 4771 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.812904 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-kube-api-access-q6bz7" (OuterVolumeSpecName: "kube-api-access-q6bz7") pod "38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" (UID: "38d92cc8-2fa5-4a7b-8e90-214597eb9fc0"). InnerVolumeSpecName "kube-api-access-q6bz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.816227 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-util" (OuterVolumeSpecName: "util") pod "38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" (UID: "38d92cc8-2fa5-4a7b-8e90-214597eb9fc0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.906491 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6bz7\" (UniqueName: \"kubernetes.io/projected/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-kube-api-access-q6bz7\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:37 crc kubenswrapper[4771]: I0129 09:19:37.906551 4771 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38d92cc8-2fa5-4a7b-8e90-214597eb9fc0-util\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:38 crc kubenswrapper[4771]: I0129 09:19:38.402088 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" Jan 29 09:19:38 crc kubenswrapper[4771]: I0129 09:19:38.402099 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr" event={"ID":"38d92cc8-2fa5-4a7b-8e90-214597eb9fc0","Type":"ContainerDied","Data":"354fe09700d4915695b858e1ceb9cc274097abda7ecb52b6ced2ccf1fab75be0"} Jan 29 09:19:38 crc kubenswrapper[4771]: I0129 09:19:38.402302 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354fe09700d4915695b858e1ceb9cc274097abda7ecb52b6ced2ccf1fab75be0" Jan 29 09:19:38 crc kubenswrapper[4771]: I0129 09:19:38.421503 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zf7ln" podStartSLOduration=2.626089384 podStartE2EDuration="6.421481795s" podCreationTimestamp="2026-01-29 09:19:32 +0000 UTC" firstStartedPulling="2026-01-29 09:19:33.349864746 +0000 UTC m=+793.472704973" lastFinishedPulling="2026-01-29 09:19:37.145257157 +0000 UTC m=+797.268097384" observedRunningTime="2026-01-29 09:19:38.420054097 +0000 UTC m=+798.542894334" watchObservedRunningTime="2026-01-29 09:19:38.421481795 +0000 UTC m=+798.544322022" Jan 29 09:19:42 crc kubenswrapper[4771]: I0129 09:19:42.366772 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:42 crc kubenswrapper[4771]: I0129 09:19:42.367530 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:43 crc kubenswrapper[4771]: I0129 09:19:43.404810 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zf7ln" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="registry-server" probeResult="failure" output=< Jan 29 09:19:43 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:19:43 crc kubenswrapper[4771]: > Jan 29 09:19:44 crc kubenswrapper[4771]: I0129 09:19:44.271435 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 29 09:19:44 crc kubenswrapper[4771]: I0129 09:19:44.271498 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.823663 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb"] Jan 29 09:19:47 crc kubenswrapper[4771]: E0129 09:19:47.824389 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerName="extract" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.824405 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerName="extract" Jan 29 09:19:47 crc kubenswrapper[4771]: E0129 09:19:47.824417 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerName="pull" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.824423 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerName="pull" Jan 29 09:19:47 crc kubenswrapper[4771]: E0129 09:19:47.824451 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerName="util" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.824457 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerName="util" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.824557 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d92cc8-2fa5-4a7b-8e90-214597eb9fc0" containerName="extract" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.825155 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.827362 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.827654 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-t4q9n" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.827769 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.827797 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.828154 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.843709 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb"] Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.940631 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k99zh\" (UniqueName: \"kubernetes.io/projected/a0a8dfb7-3f50-4649-ade4-04de19016aaf-kube-api-access-k99zh\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.940856 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0a8dfb7-3f50-4649-ade4-04de19016aaf-webhook-cert\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:47 crc kubenswrapper[4771]: I0129 09:19:47.941268 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0a8dfb7-3f50-4649-ade4-04de19016aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.042604 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k99zh\" (UniqueName: \"kubernetes.io/projected/a0a8dfb7-3f50-4649-ade4-04de19016aaf-kube-api-access-k99zh\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.042715 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0a8dfb7-3f50-4649-ade4-04de19016aaf-webhook-cert\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.042745 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0a8dfb7-3f50-4649-ade4-04de19016aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.061775 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0a8dfb7-3f50-4649-ade4-04de19016aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.062604 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k99zh\" (UniqueName: \"kubernetes.io/projected/a0a8dfb7-3f50-4649-ade4-04de19016aaf-kube-api-access-k99zh\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.063316 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0a8dfb7-3f50-4649-ade4-04de19016aaf-webhook-cert\") pod \"metallb-operator-controller-manager-86b88966b-ts5vb\" (UID: \"a0a8dfb7-3f50-4649-ade4-04de19016aaf\") " pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.144861 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.365869 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz"] Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.367619 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.390270 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.391192 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-szxhf" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.400118 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.471078 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz"] Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.561860 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzztk\" (UniqueName: \"kubernetes.io/projected/760f2b5f-d6d9-4bae-bb03-02c91232b71d-kube-api-access-bzztk\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.561901 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/760f2b5f-d6d9-4bae-bb03-02c91232b71d-apiservice-cert\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.561922 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/760f2b5f-d6d9-4bae-bb03-02c91232b71d-webhook-cert\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.663171 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzztk\" (UniqueName: \"kubernetes.io/projected/760f2b5f-d6d9-4bae-bb03-02c91232b71d-kube-api-access-bzztk\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.663231 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/760f2b5f-d6d9-4bae-bb03-02c91232b71d-apiservice-cert\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.663258 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/760f2b5f-d6d9-4bae-bb03-02c91232b71d-webhook-cert\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.665016 4771 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb"] Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.671951 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/760f2b5f-d6d9-4bae-bb03-02c91232b71d-apiservice-cert\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.679967 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/760f2b5f-d6d9-4bae-bb03-02c91232b71d-webhook-cert\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.702930 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzztk\" (UniqueName: \"kubernetes.io/projected/760f2b5f-d6d9-4bae-bb03-02c91232b71d-kube-api-access-bzztk\") pod \"metallb-operator-webhook-server-88c44cd79-5zvsz\" (UID: \"760f2b5f-d6d9-4bae-bb03-02c91232b71d\") " pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:48 crc kubenswrapper[4771]: I0129 09:19:48.732410 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:49 crc kubenswrapper[4771]: I0129 09:19:49.188937 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz"] Jan 29 09:19:49 crc kubenswrapper[4771]: W0129 09:19:49.193939 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod760f2b5f_d6d9_4bae_bb03_02c91232b71d.slice/crio-d737a4fa50d27e0472fd244e44d66e0070e07feb9f68b12289cf1a167cc1dead WatchSource:0}: Error finding container d737a4fa50d27e0472fd244e44d66e0070e07feb9f68b12289cf1a167cc1dead: Status 404 returned error can't find the container with id d737a4fa50d27e0472fd244e44d66e0070e07feb9f68b12289cf1a167cc1dead Jan 29 09:19:49 crc kubenswrapper[4771]: I0129 09:19:49.492990 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" event={"ID":"760f2b5f-d6d9-4bae-bb03-02c91232b71d","Type":"ContainerStarted","Data":"d737a4fa50d27e0472fd244e44d66e0070e07feb9f68b12289cf1a167cc1dead"} Jan 29 09:19:49 crc kubenswrapper[4771]: I0129 09:19:49.494412 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" event={"ID":"a0a8dfb7-3f50-4649-ade4-04de19016aaf","Type":"ContainerStarted","Data":"650dc215827fa5ca9550450793e5c9a150c85fa139365bcd0153b4f1a5f7804d"} Jan 29 09:19:52 crc kubenswrapper[4771]: I0129 09:19:52.470758 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:52 crc kubenswrapper[4771]: I0129 09:19:52.556485 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:53 crc kubenswrapper[4771]: I0129 09:19:53.534975 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" event={"ID":"a0a8dfb7-3f50-4649-ade4-04de19016aaf","Type":"ContainerStarted","Data":"c94d7b8be16cc28621d3e93cc6232b3ffebb51c148c321fd693416e11f3beca3"} Jan 29 09:19:53 crc kubenswrapper[4771]: I0129 09:19:53.535432 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:19:53 crc kubenswrapper[4771]: I0129 09:19:53.607622 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" podStartSLOduration=2.396919395 podStartE2EDuration="6.607604249s" podCreationTimestamp="2026-01-29 09:19:47 +0000 UTC" firstStartedPulling="2026-01-29 09:19:48.685111579 +0000 UTC m=+808.807951806" lastFinishedPulling="2026-01-29 09:19:52.895796433 +0000 UTC m=+813.018636660" observedRunningTime="2026-01-29 09:19:53.560064914 +0000 UTC m=+813.682905141" watchObservedRunningTime="2026-01-29 09:19:53.607604249 +0000 UTC m=+813.730444466" Jan 29 09:19:53 crc kubenswrapper[4771]: I0129 09:19:53.609290 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zf7ln"] Jan 29 09:19:53 crc kubenswrapper[4771]: I0129 09:19:53.609614 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zf7ln" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="registry-server" containerID="cri-o://627423b94f0414c5228479ae95ab137aeec7ca11a3bc210dd3623d256fa9f9f2" gracePeriod=2 Jan 29 09:19:54 crc kubenswrapper[4771]: I0129 09:19:54.544732 4771 generic.go:334] "Generic (PLEG): container finished" podID="076c0374-5fbf-4964-8653-9d954e66ba70" containerID="627423b94f0414c5228479ae95ab137aeec7ca11a3bc210dd3623d256fa9f9f2" exitCode=0 Jan 29 09:19:54 crc kubenswrapper[4771]: I0129 09:19:54.544931 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf7ln" event={"ID":"076c0374-5fbf-4964-8653-9d954e66ba70","Type":"ContainerDied","Data":"627423b94f0414c5228479ae95ab137aeec7ca11a3bc210dd3623d256fa9f9f2"} Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.447634 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.554392 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf7ln" event={"ID":"076c0374-5fbf-4964-8653-9d954e66ba70","Type":"ContainerDied","Data":"151955bb325626f29b27dae3f0c48a8bc79e77e4465c4d2ad1416d73e36d1869"} Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.554444 4771 scope.go:117] "RemoveContainer" containerID="627423b94f0414c5228479ae95ab137aeec7ca11a3bc210dd3623d256fa9f9f2" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.554494 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zf7ln" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.580953 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-utilities\") pod \"076c0374-5fbf-4964-8653-9d954e66ba70\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.581003 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-catalog-content\") pod \"076c0374-5fbf-4964-8653-9d954e66ba70\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.581045 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbmpr\" (UniqueName: \"kubernetes.io/projected/076c0374-5fbf-4964-8653-9d954e66ba70-kube-api-access-sbmpr\") pod \"076c0374-5fbf-4964-8653-9d954e66ba70\" (UID: \"076c0374-5fbf-4964-8653-9d954e66ba70\") " Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.583477 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-utilities" (OuterVolumeSpecName: "utilities") pod "076c0374-5fbf-4964-8653-9d954e66ba70" (UID: "076c0374-5fbf-4964-8653-9d954e66ba70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.594078 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076c0374-5fbf-4964-8653-9d954e66ba70-kube-api-access-sbmpr" (OuterVolumeSpecName: "kube-api-access-sbmpr") pod "076c0374-5fbf-4964-8653-9d954e66ba70" (UID: "076c0374-5fbf-4964-8653-9d954e66ba70"). InnerVolumeSpecName "kube-api-access-sbmpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.596997 4771 scope.go:117] "RemoveContainer" containerID="0ac6c09def882f09b92e24349936a2cf863fc333d294cfa72b3367b4022a01fe" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.628413 4771 scope.go:117] "RemoveContainer" containerID="60d25b33155a407f7c09a1b4fec9f2e34fa6fb258d141436d40ad1db3fb203bd" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.682321 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.682844 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbmpr\" (UniqueName: \"kubernetes.io/projected/076c0374-5fbf-4964-8653-9d954e66ba70-kube-api-access-sbmpr\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.706038 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "076c0374-5fbf-4964-8653-9d954e66ba70" (UID: "076c0374-5fbf-4964-8653-9d954e66ba70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.783494 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/076c0374-5fbf-4964-8653-9d954e66ba70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.881959 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zf7ln"] Jan 29 09:19:55 crc kubenswrapper[4771]: I0129 09:19:55.904145 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zf7ln"] Jan 29 09:19:56 crc kubenswrapper[4771]: I0129 09:19:56.561940 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" event={"ID":"760f2b5f-d6d9-4bae-bb03-02c91232b71d","Type":"ContainerStarted","Data":"3236c336d472bee8c911a0699785ff00fc276c0301b91d9219da0d2664937185"} Jan 29 09:19:56 crc kubenswrapper[4771]: I0129 09:19:56.563854 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:19:56 crc kubenswrapper[4771]: I0129 09:19:56.587043 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" podStartSLOduration=2.354569304 podStartE2EDuration="8.587024221s" podCreationTimestamp="2026-01-29 09:19:48 +0000 UTC" firstStartedPulling="2026-01-29 09:19:49.197859956 +0000 UTC m=+809.320700173" lastFinishedPulling="2026-01-29 09:19:55.430314863 +0000 UTC m=+815.553155090" observedRunningTime="2026-01-29 09:19:56.584315119 +0000 UTC m=+816.707155346" watchObservedRunningTime="2026-01-29 09:19:56.587024221 +0000 UTC m=+816.709864448" Jan 29 09:19:56 crc kubenswrapper[4771]: I0129 09:19:56.847281 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" path="/var/lib/kubelet/pods/076c0374-5fbf-4964-8653-9d954e66ba70/volumes" Jan 29 09:20:08 crc kubenswrapper[4771]: I0129 09:20:08.744382 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-88c44cd79-5zvsz" Jan 29 09:20:14 crc kubenswrapper[4771]: I0129 09:20:14.271799 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:20:14 crc kubenswrapper[4771]: I0129 09:20:14.272194 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:20:14 crc kubenswrapper[4771]: I0129 09:20:14.272253 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:20:14 crc kubenswrapper[4771]: I0129 09:20:14.272785 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03920492f3ef5aedc2a41c61dc5f9a95c03384d306c3153f2e8b409334342291"} 
pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:20:14 crc kubenswrapper[4771]: I0129 09:20:14.272833 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://03920492f3ef5aedc2a41c61dc5f9a95c03384d306c3153f2e8b409334342291" gracePeriod=600 Jan 29 09:20:15 crc kubenswrapper[4771]: I0129 09:20:15.689986 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="03920492f3ef5aedc2a41c61dc5f9a95c03384d306c3153f2e8b409334342291" exitCode=0 Jan 29 09:20:15 crc kubenswrapper[4771]: I0129 09:20:15.690058 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"03920492f3ef5aedc2a41c61dc5f9a95c03384d306c3153f2e8b409334342291"} Jan 29 09:20:15 crc kubenswrapper[4771]: I0129 09:20:15.690498 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"20dd9c59445370fa21c3eb4e7a8f0add609e4bdb4bf039a1de172564e5cc26d6"} Jan 29 09:20:15 crc kubenswrapper[4771]: I0129 09:20:15.690527 4771 scope.go:117] "RemoveContainer" containerID="4424deee2a2b6ea98b4730414a111f836bf44a867c8b22ec3e4343e7aa010238" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.147370 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-86b88966b-ts5vb" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.853925 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-f4cw8"] Jan 29 09:20:28 crc kubenswrapper[4771]: E0129 09:20:28.854834 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="extract-utilities" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.854865 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="extract-utilities" Jan 29 09:20:28 crc kubenswrapper[4771]: E0129 09:20:28.854899 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="extract-content" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.854912 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="extract-content" Jan 29 09:20:28 crc kubenswrapper[4771]: E0129 09:20:28.854929 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="registry-server" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.854937 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="registry-server" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.855117 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="076c0374-5fbf-4964-8653-9d954e66ba70" containerName="registry-server" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.857626 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.860656 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.860941 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gq86t" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.861760 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.885384 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-reloader\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.885437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-conf\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.885565 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-metrics\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.885622 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkf9c\" (UniqueName: \"kubernetes.io/projected/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-kube-api-access-nkf9c\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.885654 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-sockets\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.885722 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-metrics-certs\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.885741 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-startup\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.886116 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7"] Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.887210 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.889090 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.891752 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7"] Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.981140 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-44br2"] Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.982187 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-44br2" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.986761 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.986813 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-metrics-certs\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.986835 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c791596-99d9-4d8f-ba02-c4b866a007a4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6sfz7\" (UID: \"5c791596-99d9-4d8f-ba02-c4b866a007a4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.986910 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-reloader\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.986939 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-conf\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.986974 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8l8\" (UniqueName: \"kubernetes.io/projected/05c9b0d5-8464-4769-bb43-685213c34f16-kube-api-access-gg8l8\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987007 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-metrics\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987038 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkf9c\" 
(UniqueName: \"kubernetes.io/projected/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-kube-api-access-nkf9c\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987062 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8z4w\" (UniqueName: \"kubernetes.io/projected/5c791596-99d9-4d8f-ba02-c4b866a007a4-kube-api-access-l8z4w\") pod \"frr-k8s-webhook-server-7df86c4f6c-6sfz7\" (UID: \"5c791596-99d9-4d8f-ba02-c4b866a007a4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987091 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-sockets\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987121 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05c9b0d5-8464-4769-bb43-685213c34f16-metallb-excludel2\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987146 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-metrics-certs\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987168 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-startup\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987578 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-reloader\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987603 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-metrics\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987938 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-sockets\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.987974 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-conf\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 
09:20:28.988606 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-frr-startup\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.989347 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.989446 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.989441 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.991431 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lhw8f" Jan 29 09:20:28 crc kubenswrapper[4771]: I0129 09:20:28.997234 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-mgllh"] Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.000197 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-metrics-certs\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.009398 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.018072 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.022775 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mgllh"] Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.024308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkf9c\" (UniqueName: \"kubernetes.io/projected/8579bfb4-69ea-4f49-aefd-46082a0d7eb0-kube-api-access-nkf9c\") pod \"frr-k8s-f4cw8\" (UID: \"8579bfb4-69ea-4f49-aefd-46082a0d7eb0\") " pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.089769 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8l8\" (UniqueName: \"kubernetes.io/projected/05c9b0d5-8464-4769-bb43-685213c34f16-kube-api-access-gg8l8\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.089840 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8z4w\" (UniqueName: \"kubernetes.io/projected/5c791596-99d9-4d8f-ba02-c4b866a007a4-kube-api-access-l8z4w\") pod \"frr-k8s-webhook-server-7df86c4f6c-6sfz7\" (UID: \"5c791596-99d9-4d8f-ba02-c4b866a007a4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.089913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05c9b0d5-8464-4769-bb43-685213c34f16-metallb-excludel2\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " 
pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.090887 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05c9b0d5-8464-4769-bb43-685213c34f16-metallb-excludel2\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.090971 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1bccade-3951-4e67-9078-70e904be5b4c-cert\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.091036 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.091087 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1bccade-3951-4e67-9078-70e904be5b4c-metrics-certs\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.091107 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-metrics-certs\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: E0129 09:20:29.091158 4771 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 09:20:29 crc kubenswrapper[4771]: E0129 09:20:29.091223 4771 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 09:20:29 crc kubenswrapper[4771]: E0129 09:20:29.091235 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist podName:05c9b0d5-8464-4769-bb43-685213c34f16 nodeName:}" failed. No retries permitted until 2026-01-29 09:20:29.591215337 +0000 UTC m=+849.714055564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist") pod "speaker-44br2" (UID: "05c9b0d5-8464-4769-bb43-685213c34f16") : secret "metallb-memberlist" not found Jan 29 09:20:29 crc kubenswrapper[4771]: E0129 09:20:29.091271 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-metrics-certs podName:05c9b0d5-8464-4769-bb43-685213c34f16 nodeName:}" failed. No retries permitted until 2026-01-29 09:20:29.591256318 +0000 UTC m=+849.714096545 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-metrics-certs") pod "speaker-44br2" (UID: "05c9b0d5-8464-4769-bb43-685213c34f16") : secret "speaker-certs-secret" not found Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.091297 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c791596-99d9-4d8f-ba02-c4b866a007a4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6sfz7\" (UID: \"5c791596-99d9-4d8f-ba02-c4b866a007a4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.091340 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm6sn\" (UniqueName: \"kubernetes.io/projected/a1bccade-3951-4e67-9078-70e904be5b4c-kube-api-access-rm6sn\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.096786 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c791596-99d9-4d8f-ba02-c4b866a007a4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6sfz7\" (UID: \"5c791596-99d9-4d8f-ba02-c4b866a007a4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.109351 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8z4w\" (UniqueName: \"kubernetes.io/projected/5c791596-99d9-4d8f-ba02-c4b866a007a4-kube-api-access-l8z4w\") pod \"frr-k8s-webhook-server-7df86c4f6c-6sfz7\" (UID: \"5c791596-99d9-4d8f-ba02-c4b866a007a4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.109918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8l8\" (UniqueName: \"kubernetes.io/projected/05c9b0d5-8464-4769-bb43-685213c34f16-kube-api-access-gg8l8\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.178941 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.192796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1bccade-3951-4e67-9078-70e904be5b4c-cert\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.192982 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1bccade-3951-4e67-9078-70e904be5b4c-metrics-certs\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.193028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm6sn\" (UniqueName: \"kubernetes.io/projected/a1bccade-3951-4e67-9078-70e904be5b4c-kube-api-access-rm6sn\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.197420 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1bccade-3951-4e67-9078-70e904be5b4c-metrics-certs\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.197816 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.207981 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1bccade-3951-4e67-9078-70e904be5b4c-cert\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.208459 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.213485 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm6sn\" (UniqueName: \"kubernetes.io/projected/a1bccade-3951-4e67-9078-70e904be5b4c-kube-api-access-rm6sn\") pod \"controller-6968d8fdc4-mgllh\" (UID: \"a1bccade-3951-4e67-9078-70e904be5b4c\") " pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.362868 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.420902 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7"] Jan 29 09:20:29 crc kubenswrapper[4771]: W0129 09:20:29.435282 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c791596_99d9_4d8f_ba02_c4b866a007a4.slice/crio-7803a6f65061ce4e06cf41174e4d9eaa037f743ff5bb1819d6be2b2b5ff2b165 WatchSource:0}: Error finding container 7803a6f65061ce4e06cf41174e4d9eaa037f743ff5bb1819d6be2b2b5ff2b165: Status 404 returned error can't find the container with id 7803a6f65061ce4e06cf41174e4d9eaa037f743ff5bb1819d6be2b2b5ff2b165 Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.561585 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mgllh"] Jan 29 09:20:29 crc kubenswrapper[4771]: W0129 09:20:29.570111 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1bccade_3951_4e67_9078_70e904be5b4c.slice/crio-5b81667ccff202fd8e8ef025a4af5bffeb2e340a550a22d930d99149bd940a2a WatchSource:0}: Error finding container 5b81667ccff202fd8e8ef025a4af5bffeb2e340a550a22d930d99149bd940a2a: Status 404 returned error can't find the container with id 5b81667ccff202fd8e8ef025a4af5bffeb2e340a550a22d930d99149bd940a2a Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.607676 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.607745 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-metrics-certs\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: E0129 09:20:29.607811 4771 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 09:20:29 crc kubenswrapper[4771]: E0129 09:20:29.607876 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist podName:05c9b0d5-8464-4769-bb43-685213c34f16 nodeName:}" failed. No retries permitted until 2026-01-29 09:20:30.607859394 +0000 UTC m=+850.730699621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist") pod "speaker-44br2" (UID: "05c9b0d5-8464-4769-bb43-685213c34f16") : secret "metallb-memberlist" not found Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.613585 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-metrics-certs\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.777600 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" event={"ID":"5c791596-99d9-4d8f-ba02-c4b866a007a4","Type":"ContainerStarted","Data":"7803a6f65061ce4e06cf41174e4d9eaa037f743ff5bb1819d6be2b2b5ff2b165"} Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.783991 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mgllh" event={"ID":"a1bccade-3951-4e67-9078-70e904be5b4c","Type":"ContainerStarted","Data":"d624c5a9e75cf3ed32846ad83c37315ee2cbcc1b253912a6827c0250124af5cb"} Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.784045 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mgllh" event={"ID":"a1bccade-3951-4e67-9078-70e904be5b4c","Type":"ContainerStarted","Data":"5b81667ccff202fd8e8ef025a4af5bffeb2e340a550a22d930d99149bd940a2a"} Jan 29 09:20:29 crc kubenswrapper[4771]: I0129 09:20:29.785406 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerStarted","Data":"618605dac3c05946d78171d5b968abc17b9dfb2ba0f74ba9ca15047ce4f930b1"} Jan 29 09:20:30 crc kubenswrapper[4771]: I0129 09:20:30.621364 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:30 crc kubenswrapper[4771]: I0129 09:20:30.628389 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05c9b0d5-8464-4769-bb43-685213c34f16-memberlist\") pod \"speaker-44br2\" (UID: \"05c9b0d5-8464-4769-bb43-685213c34f16\") " pod="metallb-system/speaker-44br2" Jan 29 09:20:30 crc kubenswrapper[4771]: I0129 09:20:30.796146 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mgllh" event={"ID":"a1bccade-3951-4e67-9078-70e904be5b4c","Type":"ContainerStarted","Data":"2cda028878c9316d2209a37a5643bfd88f761d8e8c2ba944fc2b3dcb058725de"} Jan 29 09:20:30 crc kubenswrapper[4771]: I0129 09:20:30.796293 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:30 crc kubenswrapper[4771]: I0129 09:20:30.831611 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-mgllh" podStartSLOduration=2.831587523 podStartE2EDuration="2.831587523s" podCreationTimestamp="2026-01-29 09:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:20:30.821458599 +0000 UTC 
m=+850.944298856" watchObservedRunningTime="2026-01-29 09:20:30.831587523 +0000 UTC m=+850.954427750" Jan 29 09:20:30 crc kubenswrapper[4771]: I0129 09:20:30.851470 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-44br2" Jan 29 09:20:31 crc kubenswrapper[4771]: I0129 09:20:31.811796 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-44br2" event={"ID":"05c9b0d5-8464-4769-bb43-685213c34f16","Type":"ContainerStarted","Data":"87f56b183fd97f7936f4eb563c2b68fdf3ab7acd4f2f21a5bf5d1d52ef74e548"} Jan 29 09:20:31 crc kubenswrapper[4771]: I0129 09:20:31.812877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-44br2" event={"ID":"05c9b0d5-8464-4769-bb43-685213c34f16","Type":"ContainerStarted","Data":"d36a4cc4fb50021a0f59647baeb300baf315d3d68361d11d6278178cb3a1acd3"} Jan 29 09:20:31 crc kubenswrapper[4771]: I0129 09:20:31.812989 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-44br2" event={"ID":"05c9b0d5-8464-4769-bb43-685213c34f16","Type":"ContainerStarted","Data":"74567452e569a51d6963f8d6a4d07507219b77d841f5df59fa4031e938a77545"} Jan 29 09:20:31 crc kubenswrapper[4771]: I0129 09:20:31.813085 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-44br2" Jan 29 09:20:31 crc kubenswrapper[4771]: I0129 09:20:31.851594 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-44br2" podStartSLOduration=3.851575595 podStartE2EDuration="3.851575595s" podCreationTimestamp="2026-01-29 09:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:20:31.835300445 +0000 UTC m=+851.958140662" watchObservedRunningTime="2026-01-29 09:20:31.851575595 +0000 UTC m=+851.974415822" Jan 29 09:20:37 crc kubenswrapper[4771]: I0129 09:20:37.903078 4771 generic.go:334] "Generic (PLEG): container finished" podID="8579bfb4-69ea-4f49-aefd-46082a0d7eb0" containerID="f4d9e9e41bf25fb3711063c1e5767be2e78b8236d062e8d06ad64ad99c65e3d2" exitCode=0 Jan 29 09:20:37 crc kubenswrapper[4771]: I0129 09:20:37.903193 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerDied","Data":"f4d9e9e41bf25fb3711063c1e5767be2e78b8236d062e8d06ad64ad99c65e3d2"} Jan 29 09:20:37 crc kubenswrapper[4771]: I0129 09:20:37.912407 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" event={"ID":"5c791596-99d9-4d8f-ba02-c4b866a007a4","Type":"ContainerStarted","Data":"456c2155d8f0135126c8d041c86e4ae0fc8f2dd7febca9ef976f3d26416b1708"} Jan 29 09:20:37 crc kubenswrapper[4771]: I0129 09:20:37.912561 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:38 crc kubenswrapper[4771]: I0129 09:20:38.922315 4771 generic.go:334] "Generic (PLEG): container finished" podID="8579bfb4-69ea-4f49-aefd-46082a0d7eb0" containerID="f6e57dc9556499890287206397b3d8164b032ff33e4e24bb7fa2843bc5f39e57" exitCode=0 Jan 29 09:20:38 crc kubenswrapper[4771]: I0129 09:20:38.922405 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" 
event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerDied","Data":"f6e57dc9556499890287206397b3d8164b032ff33e4e24bb7fa2843bc5f39e57"} Jan 29 09:20:38 crc kubenswrapper[4771]: I0129 09:20:38.951658 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" podStartSLOduration=2.7105659060000002 podStartE2EDuration="10.951637508s" podCreationTimestamp="2026-01-29 09:20:28 +0000 UTC" firstStartedPulling="2026-01-29 09:20:29.447362269 +0000 UTC m=+849.570202496" lastFinishedPulling="2026-01-29 09:20:37.688433871 +0000 UTC m=+857.811274098" observedRunningTime="2026-01-29 09:20:37.979018358 +0000 UTC m=+858.101858605" watchObservedRunningTime="2026-01-29 09:20:38.951637508 +0000 UTC m=+859.074477735" Jan 29 09:20:39 crc kubenswrapper[4771]: I0129 09:20:39.368722 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-mgllh" Jan 29 09:20:39 crc kubenswrapper[4771]: I0129 09:20:39.932348 4771 generic.go:334] "Generic (PLEG): container finished" podID="8579bfb4-69ea-4f49-aefd-46082a0d7eb0" containerID="59b8ae5891ec151751959cb9071777de0a47545f9b54e426fc99426d14f3da9d" exitCode=0 Jan 29 09:20:39 crc kubenswrapper[4771]: I0129 09:20:39.932454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerDied","Data":"59b8ae5891ec151751959cb9071777de0a47545f9b54e426fc99426d14f3da9d"} Jan 29 09:20:40 crc kubenswrapper[4771]: I0129 09:20:40.941974 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerStarted","Data":"e9d0311b307db3c732b4c56327cff844a2aa7843e54132c25d5c471f443044d7"} Jan 29 09:20:40 crc kubenswrapper[4771]: I0129 09:20:40.942034 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerStarted","Data":"217d658522036bc4c5c84c4fac8adcbec4b71b93f4323411ae80cda0032164d5"} Jan 29 09:20:40 crc kubenswrapper[4771]: I0129 09:20:40.942056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerStarted","Data":"118cac30db4f5c937684229b921e73ba47406d225c0976c399e9d6bcdf07a64e"} Jan 29 09:20:41 crc kubenswrapper[4771]: I0129 09:20:41.976931 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerStarted","Data":"b0fc061d6816f2db7cb85d487a5de76a7fa4b0cf5381adcd568368e9e5c546f0"} Jan 29 09:20:41 crc kubenswrapper[4771]: I0129 09:20:41.977393 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerStarted","Data":"652424da1d9d65f911efa5f6c661b386d864bb6061ba1ae9274245b9f5ae249b"} Jan 29 09:20:41 crc kubenswrapper[4771]: I0129 09:20:41.977409 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-f4cw8" event={"ID":"8579bfb4-69ea-4f49-aefd-46082a0d7eb0","Type":"ContainerStarted","Data":"a61a9d599fd21dd13b126296019b937a7f4564962c816237a8f9160511539240"} Jan 29 09:20:41 crc kubenswrapper[4771]: I0129 09:20:41.977427 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:42 
crc kubenswrapper[4771]: I0129 09:20:42.001859 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-f4cw8" podStartSLOduration=5.695237246 podStartE2EDuration="14.001838263s" podCreationTimestamp="2026-01-29 09:20:28 +0000 UTC" firstStartedPulling="2026-01-29 09:20:29.34731314 +0000 UTC m=+849.470153367" lastFinishedPulling="2026-01-29 09:20:37.653914157 +0000 UTC m=+857.776754384" observedRunningTime="2026-01-29 09:20:42.00136717 +0000 UTC m=+862.124207407" watchObservedRunningTime="2026-01-29 09:20:42.001838263 +0000 UTC m=+862.124678490" Jan 29 09:20:44 crc kubenswrapper[4771]: I0129 09:20:44.179997 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:44 crc kubenswrapper[4771]: I0129 09:20:44.236439 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:20:49 crc kubenswrapper[4771]: I0129 09:20:49.213406 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6sfz7" Jan 29 09:20:50 crc kubenswrapper[4771]: I0129 09:20:50.855876 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-44br2" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.021214 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xs5n2"] Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.024438 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.027323 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.027869 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.035143 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-w6d7p" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.036262 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xs5n2"] Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.222744 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9pp\" (UniqueName: \"kubernetes.io/projected/4caf16d8-dac6-4281-96a5-97e82d2a828f-kube-api-access-wg9pp\") pod \"openstack-operator-index-xs5n2\" (UID: \"4caf16d8-dac6-4281-96a5-97e82d2a828f\") " pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.324682 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg9pp\" (UniqueName: \"kubernetes.io/projected/4caf16d8-dac6-4281-96a5-97e82d2a828f-kube-api-access-wg9pp\") pod \"openstack-operator-index-xs5n2\" (UID: \"4caf16d8-dac6-4281-96a5-97e82d2a828f\") " pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.344588 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg9pp\" (UniqueName: \"kubernetes.io/projected/4caf16d8-dac6-4281-96a5-97e82d2a828f-kube-api-access-wg9pp\") pod 
\"openstack-operator-index-xs5n2\" (UID: \"4caf16d8-dac6-4281-96a5-97e82d2a828f\") " pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.387515 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:20:57 crc kubenswrapper[4771]: I0129 09:20:57.804799 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xs5n2"] Jan 29 09:20:57 crc kubenswrapper[4771]: W0129 09:20:57.811747 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4caf16d8_dac6_4281_96a5_97e82d2a828f.slice/crio-fee29385ae887ab161728d7e6c4a2e432cf45603a89055099c3e91c9cd59cd8f WatchSource:0}: Error finding container fee29385ae887ab161728d7e6c4a2e432cf45603a89055099c3e91c9cd59cd8f: Status 404 returned error can't find the container with id fee29385ae887ab161728d7e6c4a2e432cf45603a89055099c3e91c9cd59cd8f Jan 29 09:20:58 crc kubenswrapper[4771]: I0129 09:20:58.098215 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xs5n2" event={"ID":"4caf16d8-dac6-4281-96a5-97e82d2a828f","Type":"ContainerStarted","Data":"fee29385ae887ab161728d7e6c4a2e432cf45603a89055099c3e91c9cd59cd8f"} Jan 29 09:20:59 crc kubenswrapper[4771]: I0129 09:20:59.185861 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-f4cw8" Jan 29 09:21:01 crc kubenswrapper[4771]: I0129 09:21:01.118918 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xs5n2" event={"ID":"4caf16d8-dac6-4281-96a5-97e82d2a828f","Type":"ContainerStarted","Data":"b1850d364489744631598995a599fe56de755e870a600e410d739aa6498d4d8c"} Jan 29 09:21:01 crc kubenswrapper[4771]: I0129 09:21:01.144439 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xs5n2" podStartSLOduration=1.845085045 podStartE2EDuration="4.144414852s" podCreationTimestamp="2026-01-29 09:20:57 +0000 UTC" firstStartedPulling="2026-01-29 09:20:57.814085414 +0000 UTC m=+877.936925681" lastFinishedPulling="2026-01-29 09:21:00.113415261 +0000 UTC m=+880.236255488" observedRunningTime="2026-01-29 09:21:01.142386427 +0000 UTC m=+881.265226654" watchObservedRunningTime="2026-01-29 09:21:01.144414852 +0000 UTC m=+881.267255079" Jan 29 09:21:07 crc kubenswrapper[4771]: I0129 09:21:07.388353 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:21:07 crc kubenswrapper[4771]: I0129 09:21:07.389125 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:21:07 crc kubenswrapper[4771]: I0129 09:21:07.424053 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:21:08 crc kubenswrapper[4771]: I0129 09:21:08.188392 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xs5n2" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.464167 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7"] Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.466087 
4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.468037 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fjgck" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.485035 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7"] Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.583987 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-util\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.584100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprsg\" (UniqueName: \"kubernetes.io/projected/49bb4d23-3c87-4d67-adac-18be6e729790-kube-api-access-lprsg\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.584128 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-bundle\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.685558 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lprsg\" (UniqueName: \"kubernetes.io/projected/49bb4d23-3c87-4d67-adac-18be6e729790-kube-api-access-lprsg\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.685628 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-bundle\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.685728 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-util\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.686645 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-bundle\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.686801 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-util\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.705244 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lprsg\" (UniqueName: \"kubernetes.io/projected/49bb4d23-3c87-4d67-adac-18be6e729790-kube-api-access-lprsg\") pod \"93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:14 crc kubenswrapper[4771]: I0129 09:21:14.785928 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:15 crc kubenswrapper[4771]: I0129 09:21:15.128619 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7"] Jan 29 09:21:15 crc kubenswrapper[4771]: W0129 09:21:15.134001 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49bb4d23_3c87_4d67_adac_18be6e729790.slice/crio-4011b8ac33af9bdc4fb76c99d621b6c31b28e559ed31d9072a3ad05a347768a0 WatchSource:0}: Error finding container 4011b8ac33af9bdc4fb76c99d621b6c31b28e559ed31d9072a3ad05a347768a0: Status 404 returned error can't find the container with id 4011b8ac33af9bdc4fb76c99d621b6c31b28e559ed31d9072a3ad05a347768a0 Jan 29 09:21:15 crc kubenswrapper[4771]: I0129 09:21:15.208534 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" event={"ID":"49bb4d23-3c87-4d67-adac-18be6e729790","Type":"ContainerStarted","Data":"4011b8ac33af9bdc4fb76c99d621b6c31b28e559ed31d9072a3ad05a347768a0"} Jan 29 09:21:16 crc kubenswrapper[4771]: I0129 09:21:16.219344 4771 generic.go:334] "Generic (PLEG): container finished" podID="49bb4d23-3c87-4d67-adac-18be6e729790" containerID="ee4c9f1497e1d5fbeb8c03c1f00947b6bc582af238287a11a8b3449fdd0ae3ca" exitCode=0 Jan 29 09:21:16 crc kubenswrapper[4771]: I0129 09:21:16.219414 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" event={"ID":"49bb4d23-3c87-4d67-adac-18be6e729790","Type":"ContainerDied","Data":"ee4c9f1497e1d5fbeb8c03c1f00947b6bc582af238287a11a8b3449fdd0ae3ca"} Jan 29 09:21:17 crc kubenswrapper[4771]: I0129 09:21:17.230832 4771 generic.go:334] "Generic (PLEG): container finished" podID="49bb4d23-3c87-4d67-adac-18be6e729790" containerID="6db079d82ab3bab84a494716cea6d90326468172108ad26b1f62fa08f4313b53" exitCode=0 Jan 29 09:21:17 crc kubenswrapper[4771]: I0129 09:21:17.230895 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" event={"ID":"49bb4d23-3c87-4d67-adac-18be6e729790","Type":"ContainerDied","Data":"6db079d82ab3bab84a494716cea6d90326468172108ad26b1f62fa08f4313b53"} Jan 29 09:21:18 crc kubenswrapper[4771]: I0129 09:21:18.239528 4771 generic.go:334] "Generic (PLEG): container finished" podID="49bb4d23-3c87-4d67-adac-18be6e729790" containerID="572454c181026113dcfa3d7d4fe44f73afd87a5ea5a97d81f57df5e81f105793" exitCode=0 Jan 29 09:21:18 crc kubenswrapper[4771]: I0129 09:21:18.239610 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" event={"ID":"49bb4d23-3c87-4d67-adac-18be6e729790","Type":"ContainerDied","Data":"572454c181026113dcfa3d7d4fe44f73afd87a5ea5a97d81f57df5e81f105793"} Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.568003 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.657831 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-bundle\") pod \"49bb4d23-3c87-4d67-adac-18be6e729790\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.658062 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-util\") pod \"49bb4d23-3c87-4d67-adac-18be6e729790\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.658090 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lprsg\" (UniqueName: \"kubernetes.io/projected/49bb4d23-3c87-4d67-adac-18be6e729790-kube-api-access-lprsg\") pod \"49bb4d23-3c87-4d67-adac-18be6e729790\" (UID: \"49bb4d23-3c87-4d67-adac-18be6e729790\") " Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.658883 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-bundle" (OuterVolumeSpecName: "bundle") pod "49bb4d23-3c87-4d67-adac-18be6e729790" (UID: "49bb4d23-3c87-4d67-adac-18be6e729790"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.666962 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bb4d23-3c87-4d67-adac-18be6e729790-kube-api-access-lprsg" (OuterVolumeSpecName: "kube-api-access-lprsg") pod "49bb4d23-3c87-4d67-adac-18be6e729790" (UID: "49bb4d23-3c87-4d67-adac-18be6e729790"). InnerVolumeSpecName "kube-api-access-lprsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.674215 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-util" (OuterVolumeSpecName: "util") pod "49bb4d23-3c87-4d67-adac-18be6e729790" (UID: "49bb4d23-3c87-4d67-adac-18be6e729790"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.760272 4771 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-util\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.760345 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lprsg\" (UniqueName: \"kubernetes.io/projected/49bb4d23-3c87-4d67-adac-18be6e729790-kube-api-access-lprsg\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:19 crc kubenswrapper[4771]: I0129 09:21:19.760359 4771 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49bb4d23-3c87-4d67-adac-18be6e729790-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:20 crc kubenswrapper[4771]: I0129 09:21:20.256037 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" event={"ID":"49bb4d23-3c87-4d67-adac-18be6e729790","Type":"ContainerDied","Data":"4011b8ac33af9bdc4fb76c99d621b6c31b28e559ed31d9072a3ad05a347768a0"} Jan 29 09:21:20 crc kubenswrapper[4771]: I0129 09:21:20.256098 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4011b8ac33af9bdc4fb76c99d621b6c31b28e559ed31d9072a3ad05a347768a0" Jan 29 09:21:20 crc kubenswrapper[4771]: I0129 09:21:20.256115 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.669213 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv"] Jan 29 09:21:22 crc kubenswrapper[4771]: E0129 09:21:22.670252 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bb4d23-3c87-4d67-adac-18be6e729790" containerName="util" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.670271 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bb4d23-3c87-4d67-adac-18be6e729790" containerName="util" Jan 29 09:21:22 crc kubenswrapper[4771]: E0129 09:21:22.670304 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bb4d23-3c87-4d67-adac-18be6e729790" containerName="pull" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.670314 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bb4d23-3c87-4d67-adac-18be6e729790" containerName="pull" Jan 29 09:21:22 crc kubenswrapper[4771]: E0129 09:21:22.670329 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bb4d23-3c87-4d67-adac-18be6e729790" containerName="extract" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.670337 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bb4d23-3c87-4d67-adac-18be6e729790" containerName="extract" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.670470 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bb4d23-3c87-4d67-adac-18be6e729790" containerName="extract" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.671045 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.673479 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-2lhfd" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.698120 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv"] Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.808556 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4dq\" (UniqueName: \"kubernetes.io/projected/ca41b7ae-086c-41c0-b397-3239655e4d1d-kube-api-access-zz4dq\") pod \"openstack-operator-controller-init-5f8b9f866c-gltlv\" (UID: \"ca41b7ae-086c-41c0-b397-3239655e4d1d\") " pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.909833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz4dq\" (UniqueName: \"kubernetes.io/projected/ca41b7ae-086c-41c0-b397-3239655e4d1d-kube-api-access-zz4dq\") pod \"openstack-operator-controller-init-5f8b9f866c-gltlv\" (UID: \"ca41b7ae-086c-41c0-b397-3239655e4d1d\") " pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" Jan 29 09:21:22 crc kubenswrapper[4771]: I0129 09:21:22.934311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz4dq\" (UniqueName: \"kubernetes.io/projected/ca41b7ae-086c-41c0-b397-3239655e4d1d-kube-api-access-zz4dq\") pod \"openstack-operator-controller-init-5f8b9f866c-gltlv\" (UID: \"ca41b7ae-086c-41c0-b397-3239655e4d1d\") " pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" Jan 29 09:21:23 crc kubenswrapper[4771]: I0129 09:21:23.023657 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" Jan 29 09:21:23 crc kubenswrapper[4771]: I0129 09:21:23.415469 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv"] Jan 29 09:21:23 crc kubenswrapper[4771]: W0129 09:21:23.420965 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca41b7ae_086c_41c0_b397_3239655e4d1d.slice/crio-833d0a2b621584d5937991a83a651b0cb503a20e07c16829017928fe3d3aeb40 WatchSource:0}: Error finding container 833d0a2b621584d5937991a83a651b0cb503a20e07c16829017928fe3d3aeb40: Status 404 returned error can't find the container with id 833d0a2b621584d5937991a83a651b0cb503a20e07c16829017928fe3d3aeb40 Jan 29 09:21:24 crc kubenswrapper[4771]: I0129 09:21:24.288228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" event={"ID":"ca41b7ae-086c-41c0-b397-3239655e4d1d","Type":"ContainerStarted","Data":"833d0a2b621584d5937991a83a651b0cb503a20e07c16829017928fe3d3aeb40"} Jan 29 09:21:28 crc kubenswrapper[4771]: I0129 09:21:28.323207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" event={"ID":"ca41b7ae-086c-41c0-b397-3239655e4d1d","Type":"ContainerStarted","Data":"005399eb2d2a17793c1adc96a2908ebff8ae687c9ab368f8cc10b3bf68ae9dc2"} Jan 29 09:21:28 crc kubenswrapper[4771]: I0129 09:21:28.323674 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" Jan 29 09:21:28 crc kubenswrapper[4771]: I0129 09:21:28.371236 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" podStartSLOduration=2.097091298 podStartE2EDuration="6.371206527s" podCreationTimestamp="2026-01-29 09:21:22 +0000 UTC" firstStartedPulling="2026-01-29 09:21:23.427077099 +0000 UTC m=+903.549917336" lastFinishedPulling="2026-01-29 09:21:27.701192338 +0000 UTC m=+907.824032565" observedRunningTime="2026-01-29 09:21:28.352537991 +0000 UTC m=+908.475378238" watchObservedRunningTime="2026-01-29 09:21:28.371206527 +0000 UTC m=+908.494046774" Jan 29 09:21:33 crc kubenswrapper[4771]: I0129 09:21:33.026285 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5f8b9f866c-gltlv" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.449739 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.451323 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.455678 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lj4zr" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.466433 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.473634 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.475079 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.477821 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-j6vxt" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.480092 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.481153 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.482687 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-f5ns5" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.569561 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.572582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw2ff\" (UniqueName: \"kubernetes.io/projected/6221aa48-bc7d-4a2f-9897-41dae47815e7-kube-api-access-hw2ff\") pod \"cinder-operator-controller-manager-8d874c8fc-w5l7q\" (UID: \"6221aa48-bc7d-4a2f-9897-41dae47815e7\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.572647 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5476\" (UniqueName: \"kubernetes.io/projected/b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e-kube-api-access-x5476\") pod \"designate-operator-controller-manager-6d9697b7f4-kzwjr\" (UID: \"b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.575962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24nq8\" (UniqueName: \"kubernetes.io/projected/742db07e-b8fa-472a-824c-ce57c4e3bca5-kube-api-access-24nq8\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-t96kk\" (UID: \"742db07e-b8fa-472a-824c-ce57c4e3bca5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.581769 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.583158 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.592629 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.592667 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zsf6l" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.599901 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.601039 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.607340 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qghnk" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.615062 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.623805 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.633793 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.635054 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.639295 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wvjtg" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.672774 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.673820 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.681708 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.681761 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxb7\" (UniqueName: \"kubernetes.io/projected/29710697-a286-413b-a7ce-01631b4cc6de-kube-api-access-xfxb7\") pod \"glance-operator-controller-manager-8886f4c47-59xvx\" (UID: \"29710697-a286-413b-a7ce-01631b4cc6de\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.681837 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw2ff\" (UniqueName: \"kubernetes.io/projected/6221aa48-bc7d-4a2f-9897-41dae47815e7-kube-api-access-hw2ff\") pod \"cinder-operator-controller-manager-8d874c8fc-w5l7q\" (UID: \"6221aa48-bc7d-4a2f-9897-41dae47815e7\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.681869 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5476\" (UniqueName: \"kubernetes.io/projected/b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e-kube-api-access-x5476\") pod \"designate-operator-controller-manager-6d9697b7f4-kzwjr\" (UID: \"b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.681933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24nq8\" (UniqueName: \"kubernetes.io/projected/742db07e-b8fa-472a-824c-ce57c4e3bca5-kube-api-access-24nq8\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-t96kk\" (UID: \"742db07e-b8fa-472a-824c-ce57c4e3bca5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.682906 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-v2dzs" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.692223 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.704933 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.706307 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.708580 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gx6c5" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.721125 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.722450 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24nq8\" (UniqueName: \"kubernetes.io/projected/742db07e-b8fa-472a-824c-ce57c4e3bca5-kube-api-access-24nq8\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-t96kk\" (UID: \"742db07e-b8fa-472a-824c-ce57c4e3bca5\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.726459 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5476\" (UniqueName: \"kubernetes.io/projected/b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e-kube-api-access-x5476\") pod \"designate-operator-controller-manager-6d9697b7f4-kzwjr\" (UID: \"b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.740259 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw2ff\" (UniqueName: \"kubernetes.io/projected/6221aa48-bc7d-4a2f-9897-41dae47815e7-kube-api-access-hw2ff\") pod \"cinder-operator-controller-manager-8d874c8fc-w5l7q\" (UID: \"6221aa48-bc7d-4a2f-9897-41dae47815e7\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.740592 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.754402 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.774130 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-l59vf" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.785474 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxfnn\" (UniqueName: \"kubernetes.io/projected/e0d87d52-0e91-4f3f-bbf2-228b57bbcff7-kube-api-access-kxfnn\") pod \"heat-operator-controller-manager-69d6db494d-2cq8m\" (UID: \"e0d87d52-0e91-4f3f-bbf2-228b57bbcff7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.785518 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s98pf\" (UniqueName: \"kubernetes.io/projected/7f152534-d323-4bdc-9d0e-86e673b65a56-kube-api-access-s98pf\") pod \"horizon-operator-controller-manager-5fb775575f-xczpv\" (UID: \"7f152534-d323-4bdc-9d0e-86e673b65a56\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.785552 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfrs\" (UniqueName: \"kubernetes.io/projected/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-kube-api-access-9gfrs\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.785583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.785644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxb7\" (UniqueName: \"kubernetes.io/projected/29710697-a286-413b-a7ce-01631b4cc6de-kube-api-access-xfxb7\") pod \"glance-operator-controller-manager-8886f4c47-59xvx\" (UID: \"29710697-a286-413b-a7ce-01631b4cc6de\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.786416 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.787430 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.805083 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.815473 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.818761 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.825841 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c298s"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.827048 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.828304 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxb7\" (UniqueName: \"kubernetes.io/projected/29710697-a286-413b-a7ce-01631b4cc6de-kube-api-access-xfxb7\") pod \"glance-operator-controller-manager-8886f4c47-59xvx\" (UID: \"29710697-a286-413b-a7ce-01631b4cc6de\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.834428 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-prdfn" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.889113 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxfnn\" (UniqueName: \"kubernetes.io/projected/e0d87d52-0e91-4f3f-bbf2-228b57bbcff7-kube-api-access-kxfnn\") pod \"heat-operator-controller-manager-69d6db494d-2cq8m\" (UID: \"e0d87d52-0e91-4f3f-bbf2-228b57bbcff7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.889164 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s98pf\" (UniqueName: \"kubernetes.io/projected/7f152534-d323-4bdc-9d0e-86e673b65a56-kube-api-access-s98pf\") pod \"horizon-operator-controller-manager-5fb775575f-xczpv\" (UID: \"7f152534-d323-4bdc-9d0e-86e673b65a56\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.889208 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gfrs\" (UniqueName: \"kubernetes.io/projected/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-kube-api-access-9gfrs\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.889248 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.889287 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddrm\" (UniqueName: \"kubernetes.io/projected/8373cf12-3567-409b-ae85-1f530e91c86a-kube-api-access-kddrm\") pod \"keystone-operator-controller-manager-84f48565d4-wfjjb\" (UID: 
\"8373cf12-3567-409b-ae85-1f530e91c86a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.889320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djgqf\" (UniqueName: \"kubernetes.io/projected/f9b6f2b9-26dd-44f5-859d-f9a1828d726d-kube-api-access-djgqf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-rjn7t\" (UID: \"f9b6f2b9-26dd-44f5-859d-f9a1828d726d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" Jan 29 09:21:52 crc kubenswrapper[4771]: E0129 09:21:52.889884 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:52 crc kubenswrapper[4771]: E0129 09:21:52.889980 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert podName:be5b01ce-6d7f-40e9-9e6e-3291fab1d242 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:53.389955778 +0000 UTC m=+933.512796175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert") pod "infra-operator-controller-manager-79955696d6-ltjpd" (UID: "be5b01ce-6d7f-40e9-9e6e-3291fab1d242") : secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.924973 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.934065 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c298s"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.934110 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.935043 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8"] Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.935143 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.952553 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lj55x" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.962616 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s98pf\" (UniqueName: \"kubernetes.io/projected/7f152534-d323-4bdc-9d0e-86e673b65a56-kube-api-access-s98pf\") pod \"horizon-operator-controller-manager-5fb775575f-xczpv\" (UID: \"7f152534-d323-4bdc-9d0e-86e673b65a56\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.970356 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxfnn\" (UniqueName: \"kubernetes.io/projected/e0d87d52-0e91-4f3f-bbf2-228b57bbcff7-kube-api-access-kxfnn\") pod \"heat-operator-controller-manager-69d6db494d-2cq8m\" (UID: \"e0d87d52-0e91-4f3f-bbf2-228b57bbcff7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.970440 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.990761 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gfrs\" (UniqueName: \"kubernetes.io/projected/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-kube-api-access-9gfrs\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.991519 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djgqf\" (UniqueName: \"kubernetes.io/projected/f9b6f2b9-26dd-44f5-859d-f9a1828d726d-kube-api-access-djgqf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-rjn7t\" (UID: \"f9b6f2b9-26dd-44f5-859d-f9a1828d726d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.991566 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8xn4\" (UniqueName: \"kubernetes.io/projected/da6fb9cf-4fe9-41a8-a645-0d98a36e9472-kube-api-access-f8xn4\") pod \"manila-operator-controller-manager-7dd968899f-c298s\" (UID: \"da6fb9cf-4fe9-41a8-a645-0d98a36e9472\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" Jan 29 09:21:52 crc kubenswrapper[4771]: I0129 09:21:52.991731 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kddrm\" (UniqueName: \"kubernetes.io/projected/8373cf12-3567-409b-ae85-1f530e91c86a-kube-api-access-kddrm\") pod \"keystone-operator-controller-manager-84f48565d4-wfjjb\" (UID: \"8373cf12-3567-409b-ae85-1f530e91c86a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.018685 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kddrm\" (UniqueName: \"kubernetes.io/projected/8373cf12-3567-409b-ae85-1f530e91c86a-kube-api-access-kddrm\") pod 
\"keystone-operator-controller-manager-84f48565d4-wfjjb\" (UID: \"8373cf12-3567-409b-ae85-1f530e91c86a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.021074 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.029204 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djgqf\" (UniqueName: \"kubernetes.io/projected/f9b6f2b9-26dd-44f5-859d-f9a1828d726d-kube-api-access-djgqf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-rjn7t\" (UID: \"f9b6f2b9-26dd-44f5-859d-f9a1828d726d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.030110 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.046585 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jmzp6" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.058827 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.090066 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.092670 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8xn4\" (UniqueName: \"kubernetes.io/projected/da6fb9cf-4fe9-41a8-a645-0d98a36e9472-kube-api-access-f8xn4\") pod \"manila-operator-controller-manager-7dd968899f-c298s\" (UID: \"da6fb9cf-4fe9-41a8-a645-0d98a36e9472\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.092780 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz569\" (UniqueName: \"kubernetes.io/projected/013a2529-271b-4c1d-8ac4-3b443a9d1069-kube-api-access-pz569\") pod \"mariadb-operator-controller-manager-67bf948998-qpqh8\" (UID: \"013a2529-271b-4c1d-8ac4-3b443a9d1069\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.143992 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-j8578"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.145556 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.164267 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rf2kc" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.216764 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.246093 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.249726 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvf9j\" (UniqueName: \"kubernetes.io/projected/bbfc6317-5079-4f9f-83e0-9f93970a0710-kube-api-access-jvf9j\") pod \"nova-operator-controller-manager-55bff696bd-j8578\" (UID: \"bbfc6317-5079-4f9f-83e0-9f93970a0710\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.249795 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p2tj\" (UniqueName: \"kubernetes.io/projected/a9c9f6d2-b488-4184-b7dd-46e228737c64-kube-api-access-5p2tj\") pod \"neutron-operator-controller-manager-585dbc889-j2vm7\" (UID: \"a9c9f6d2-b488-4184-b7dd-46e228737c64\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.249830 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz569\" (UniqueName: \"kubernetes.io/projected/013a2529-271b-4c1d-8ac4-3b443a9d1069-kube-api-access-pz569\") pod \"mariadb-operator-controller-manager-67bf948998-qpqh8\" (UID: \"013a2529-271b-4c1d-8ac4-3b443a9d1069\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.273318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8xn4\" (UniqueName: \"kubernetes.io/projected/da6fb9cf-4fe9-41a8-a645-0d98a36e9472-kube-api-access-f8xn4\") pod \"manila-operator-controller-manager-7dd968899f-c298s\" (UID: \"da6fb9cf-4fe9-41a8-a645-0d98a36e9472\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.285555 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-j8578"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.296872 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.399755 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvf9j\" (UniqueName: \"kubernetes.io/projected/bbfc6317-5079-4f9f-83e0-9f93970a0710-kube-api-access-jvf9j\") pod \"nova-operator-controller-manager-55bff696bd-j8578\" (UID: \"bbfc6317-5079-4f9f-83e0-9f93970a0710\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.399833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p2tj\" (UniqueName: \"kubernetes.io/projected/a9c9f6d2-b488-4184-b7dd-46e228737c64-kube-api-access-5p2tj\") pod \"neutron-operator-controller-manager-585dbc889-j2vm7\" (UID: \"a9c9f6d2-b488-4184-b7dd-46e228737c64\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.399898 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:53 crc kubenswrapper[4771]: E0129 09:21:53.400113 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:53 crc kubenswrapper[4771]: E0129 09:21:53.400188 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert podName:be5b01ce-6d7f-40e9-9e6e-3291fab1d242 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:54.400165591 +0000 UTC m=+934.523005818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert") pod "infra-operator-controller-manager-79955696d6-ltjpd" (UID: "be5b01ce-6d7f-40e9-9e6e-3291fab1d242") : secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.402043 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz569\" (UniqueName: \"kubernetes.io/projected/013a2529-271b-4c1d-8ac4-3b443a9d1069-kube-api-access-pz569\") pod \"mariadb-operator-controller-manager-67bf948998-qpqh8\" (UID: \"013a2529-271b-4c1d-8ac4-3b443a9d1069\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.413260 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.414419 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.425449 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hntfn" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.431670 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p2tj\" (UniqueName: \"kubernetes.io/projected/a9c9f6d2-b488-4184-b7dd-46e228737c64-kube-api-access-5p2tj\") pod \"neutron-operator-controller-manager-585dbc889-j2vm7\" (UID: \"a9c9f6d2-b488-4184-b7dd-46e228737c64\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.434773 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.446555 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvf9j\" (UniqueName: \"kubernetes.io/projected/bbfc6317-5079-4f9f-83e0-9f93970a0710-kube-api-access-jvf9j\") pod \"nova-operator-controller-manager-55bff696bd-j8578\" (UID: \"bbfc6317-5079-4f9f-83e0-9f93970a0710\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.465208 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.466427 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.469993 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vbmkt" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.470461 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.482803 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.489866 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.492517 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mlhkt" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.497388 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.509444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ftp\" (UniqueName: \"kubernetes.io/projected/9ae26fe7-fcd5-4006-aa5d-133b8b91e521-kube-api-access-x7ftp\") pod \"octavia-operator-controller-manager-6687f8d877-q74t2\" (UID: \"9ae26fe7-fcd5-4006-aa5d-133b8b91e521\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.511525 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.519014 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.520150 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.522279 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dwwh2" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.544522 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.545361 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.553031 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.559116 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4kwb4" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.572615 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.573959 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.577296 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dxls9" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.579795 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.588453 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.604587 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.612289 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xkm\" (UniqueName: \"kubernetes.io/projected/c594b46c-4d8f-4604-a70d-91544ff13805-kube-api-access-24xkm\") pod \"placement-operator-controller-manager-5b964cf4cd-w9c7v\" (UID: \"c594b46c-4d8f-4604-a70d-91544ff13805\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.612388 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.612418 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6x8c\" (UniqueName: \"kubernetes.io/projected/f5c86f6b-dae6-4551-9413-df4e429c0ffa-kube-api-access-l6x8c\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.612452 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzjk5\" (UniqueName: \"kubernetes.io/projected/488821bb-04ee-4c62-b4a3-ac83d74a8919-kube-api-access-xzjk5\") pod \"ovn-operator-controller-manager-788c46999f-mkl7v\" (UID: \"488821bb-04ee-4c62-b4a3-ac83d74a8919\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.612523 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ftp\" (UniqueName: \"kubernetes.io/projected/9ae26fe7-fcd5-4006-aa5d-133b8b91e521-kube-api-access-x7ftp\") pod \"octavia-operator-controller-manager-6687f8d877-q74t2\" (UID: \"9ae26fe7-fcd5-4006-aa5d-133b8b91e521\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.631237 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 
09:21:53.633846 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.639507 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mwg52" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.665900 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.666792 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ftp\" (UniqueName: \"kubernetes.io/projected/9ae26fe7-fcd5-4006-aa5d-133b8b91e521-kube-api-access-x7ftp\") pod \"octavia-operator-controller-manager-6687f8d877-q74t2\" (UID: \"9ae26fe7-fcd5-4006-aa5d-133b8b91e521\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.681780 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.697119 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-66qbk"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.699214 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.702501 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.705579 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5pqdv" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.710504 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-66qbk"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.713722 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24xkm\" (UniqueName: \"kubernetes.io/projected/c594b46c-4d8f-4604-a70d-91544ff13805-kube-api-access-24xkm\") pod \"placement-operator-controller-manager-5b964cf4cd-w9c7v\" (UID: \"c594b46c-4d8f-4604-a70d-91544ff13805\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.713814 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.713841 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtd7x\" (UniqueName: \"kubernetes.io/projected/c7e92467-0347-42d4-9628-639368c69b80-kube-api-access-mtd7x\") pod \"test-operator-controller-manager-56f8bfcd9f-pkgdd\" (UID: \"c7e92467-0347-42d4-9628-639368c69b80\") 
" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.713862 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6x8c\" (UniqueName: \"kubernetes.io/projected/f5c86f6b-dae6-4551-9413-df4e429c0ffa-kube-api-access-l6x8c\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.713882 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffw2\" (UniqueName: \"kubernetes.io/projected/5c0452ac-093e-45b1-825f-3ba01ed93425-kube-api-access-tffw2\") pod \"telemetry-operator-controller-manager-64b5b76f97-46m5f\" (UID: \"5c0452ac-093e-45b1-825f-3ba01ed93425\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.713909 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzjk5\" (UniqueName: \"kubernetes.io/projected/488821bb-04ee-4c62-b4a3-ac83d74a8919-kube-api-access-xzjk5\") pod \"ovn-operator-controller-manager-788c46999f-mkl7v\" (UID: \"488821bb-04ee-4c62-b4a3-ac83d74a8919\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.713930 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffqq\" (UniqueName: \"kubernetes.io/projected/688a9f5e-ab0d-4975-a033-a8cdf403fd9e-kube-api-access-9ffqq\") pod \"swift-operator-controller-manager-68fc8c869-p7tj6\" (UID: \"688a9f5e-ab0d-4975-a033-a8cdf403fd9e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" Jan 29 09:21:53 crc kubenswrapper[4771]: E0129 09:21:53.714342 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:53 crc kubenswrapper[4771]: E0129 09:21:53.714386 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert podName:f5c86f6b-dae6-4551-9413-df4e429c0ffa nodeName:}" failed. No retries permitted until 2026-01-29 09:21:54.214371106 +0000 UTC m=+934.337211333 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" (UID: "f5c86f6b-dae6-4551-9413-df4e429c0ffa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.753971 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24xkm\" (UniqueName: \"kubernetes.io/projected/c594b46c-4d8f-4604-a70d-91544ff13805-kube-api-access-24xkm\") pod \"placement-operator-controller-manager-5b964cf4cd-w9c7v\" (UID: \"c594b46c-4d8f-4604-a70d-91544ff13805\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.765842 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzjk5\" (UniqueName: \"kubernetes.io/projected/488821bb-04ee-4c62-b4a3-ac83d74a8919-kube-api-access-xzjk5\") pod \"ovn-operator-controller-manager-788c46999f-mkl7v\" (UID: \"488821bb-04ee-4c62-b4a3-ac83d74a8919\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.783929 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6x8c\" (UniqueName: \"kubernetes.io/projected/f5c86f6b-dae6-4551-9413-df4e429c0ffa-kube-api-access-l6x8c\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.785085 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.815211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffqq\" (UniqueName: \"kubernetes.io/projected/688a9f5e-ab0d-4975-a033-a8cdf403fd9e-kube-api-access-9ffqq\") pod \"swift-operator-controller-manager-68fc8c869-p7tj6\" (UID: \"688a9f5e-ab0d-4975-a033-a8cdf403fd9e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.815316 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7fbk\" (UniqueName: \"kubernetes.io/projected/5048415a-36d8-47a9-aed1-f7395e309ce3-kube-api-access-m7fbk\") pod \"watcher-operator-controller-manager-564965969-66qbk\" (UID: \"5048415a-36d8-47a9-aed1-f7395e309ce3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.815373 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtd7x\" (UniqueName: \"kubernetes.io/projected/c7e92467-0347-42d4-9628-639368c69b80-kube-api-access-mtd7x\") pod \"test-operator-controller-manager-56f8bfcd9f-pkgdd\" (UID: \"c7e92467-0347-42d4-9628-639368c69b80\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.815406 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tffw2\" (UniqueName: \"kubernetes.io/projected/5c0452ac-093e-45b1-825f-3ba01ed93425-kube-api-access-tffw2\") pod \"telemetry-operator-controller-manager-64b5b76f97-46m5f\" (UID: \"5c0452ac-093e-45b1-825f-3ba01ed93425\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.819789 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.820858 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.823037 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bhpz4" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.826276 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.827646 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.835236 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffqq\" (UniqueName: \"kubernetes.io/projected/688a9f5e-ab0d-4975-a033-a8cdf403fd9e-kube-api-access-9ffqq\") pod \"swift-operator-controller-manager-68fc8c869-p7tj6\" (UID: \"688a9f5e-ab0d-4975-a033-a8cdf403fd9e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.835274 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tffw2\" (UniqueName: \"kubernetes.io/projected/5c0452ac-093e-45b1-825f-3ba01ed93425-kube-api-access-tffw2\") pod \"telemetry-operator-controller-manager-64b5b76f97-46m5f\" (UID: \"5c0452ac-093e-45b1-825f-3ba01ed93425\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.835821 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.837437 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtd7x\" (UniqueName: \"kubernetes.io/projected/c7e92467-0347-42d4-9628-639368c69b80-kube-api-access-mtd7x\") pod \"test-operator-controller-manager-56f8bfcd9f-pkgdd\" (UID: \"c7e92467-0347-42d4-9628-639368c69b80\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.846237 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.849272 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.860141 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.861112 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-w8p8s" Jan 29 09:21:53 crc kubenswrapper[4771]: W0129 09:21:53.865133 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7b0237e_e4f4_4ff1_81f6_3f54c39d6a8e.slice/crio-661495c4374eace67ea6b37b7a8c7a476cf7ca830ae1bd1f4bf47470c3c7cfe9 WatchSource:0}: Error finding container 661495c4374eace67ea6b37b7a8c7a476cf7ca830ae1bd1f4bf47470c3c7cfe9: Status 404 returned error can't find the container with id 661495c4374eace67ea6b37b7a8c7a476cf7ca830ae1bd1f4bf47470c3c7cfe9 Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.887233 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m"] Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.911879 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.916969 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7fbk\" (UniqueName: \"kubernetes.io/projected/5048415a-36d8-47a9-aed1-f7395e309ce3-kube-api-access-m7fbk\") pod \"watcher-operator-controller-manager-564965969-66qbk\" (UID: \"5048415a-36d8-47a9-aed1-f7395e309ce3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.917068 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95p8\" (UniqueName: \"kubernetes.io/projected/72d8430d-b468-4e7b-a568-bb12c9a4c856-kube-api-access-r95p8\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.917179 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.917254 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mbs\" (UniqueName: \"kubernetes.io/projected/40f4ff01-59ff-4cb1-a683-6e1da9756691-kube-api-access-25mbs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9gz6m\" (UID: \"40f4ff01-59ff-4cb1-a683-6e1da9756691\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.917280 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " 
pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.953955 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.955243 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7fbk\" (UniqueName: \"kubernetes.io/projected/5048415a-36d8-47a9-aed1-f7395e309ce3-kube-api-access-m7fbk\") pod \"watcher-operator-controller-manager-564965969-66qbk\" (UID: \"5048415a-36d8-47a9-aed1-f7395e309ce3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" Jan 29 09:21:53 crc kubenswrapper[4771]: I0129 09:21:53.993280 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.024011 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r95p8\" (UniqueName: \"kubernetes.io/projected/72d8430d-b468-4e7b-a568-bb12c9a4c856-kube-api-access-r95p8\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.024189 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.024234 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25mbs\" (UniqueName: \"kubernetes.io/projected/40f4ff01-59ff-4cb1-a683-6e1da9756691-kube-api-access-25mbs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9gz6m\" (UID: \"40f4ff01-59ff-4cb1-a683-6e1da9756691\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.024254 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.024405 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.024487 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:54.52444179 +0000 UTC m=+934.647282017 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "metrics-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.025380 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.025413 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:54.525404516 +0000 UTC m=+934.648244743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.037835 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.065738 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25mbs\" (UniqueName: \"kubernetes.io/projected/40f4ff01-59ff-4cb1-a683-6e1da9756691-kube-api-access-25mbs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9gz6m\" (UID: \"40f4ff01-59ff-4cb1-a683-6e1da9756691\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.071058 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r95p8\" (UniqueName: \"kubernetes.io/projected/72d8430d-b468-4e7b-a568-bb12c9a4c856-kube-api-access-r95p8\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.107804 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.120445 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.156767 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.168641 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.228840 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.235176 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.235471 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.235532 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert podName:f5c86f6b-dae6-4551-9413-df4e429c0ffa nodeName:}" failed. No retries permitted until 2026-01-29 09:21:55.235511604 +0000 UTC m=+935.358351831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" (UID: "f5c86f6b-dae6-4551-9413-df4e429c0ffa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.262921 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.297816 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.313560 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c298s"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.319087 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb"] Jan 29 09:21:54 crc kubenswrapper[4771]: W0129 09:21:54.336734 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f152534_d323_4bdc_9d0e_86e673b65a56.slice/crio-2ee7fca841e14b839d67af209a5c319fad6ade3c9dc5e92bfef2ed8572e52233 WatchSource:0}: Error finding container 2ee7fca841e14b839d67af209a5c319fad6ade3c9dc5e92bfef2ed8572e52233: Status 404 returned error can't find the container with id 2ee7fca841e14b839d67af209a5c319fad6ade3c9dc5e92bfef2ed8572e52233 Jan 29 09:21:54 crc kubenswrapper[4771]: W0129 09:21:54.360221 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda6fb9cf_4fe9_41a8_a645_0d98a36e9472.slice/crio-d9e62d95b8d98f245e757327a46bf014979cdbaaf229f49f5566db4cb577e310 WatchSource:0}: Error finding container d9e62d95b8d98f245e757327a46bf014979cdbaaf229f49f5566db4cb577e310: Status 404 returned error can't find the container with id d9e62d95b8d98f245e757327a46bf014979cdbaaf229f49f5566db4cb577e310 Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.437884 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.438162 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.438218 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert podName:be5b01ce-6d7f-40e9-9e6e-3291fab1d242 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:56.438199701 +0000 UTC m=+936.561039938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert") pod "infra-operator-controller-manager-79955696d6-ltjpd" (UID: "be5b01ce-6d7f-40e9-9e6e-3291fab1d242") : secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.454520 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.500905 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" event={"ID":"6221aa48-bc7d-4a2f-9897-41dae47815e7","Type":"ContainerStarted","Data":"e3ab471b21810f380132088d98a81c5ae955bf694a02900b912eb9df39b25657"} Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.510925 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" event={"ID":"29710697-a286-413b-a7ce-01631b4cc6de","Type":"ContainerStarted","Data":"0d817eabfd3a6d31d82f97cf611c8afcc45e8fb49a4205a4059e2473d61847b3"} Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.537545 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" event={"ID":"da6fb9cf-4fe9-41a8-a645-0d98a36e9472","Type":"ContainerStarted","Data":"d9e62d95b8d98f245e757327a46bf014979cdbaaf229f49f5566db4cb577e310"} Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.538957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.539034 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.539218 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.539282 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:55.539261448 +0000 UTC m=+935.662101675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "metrics-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.539711 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: E0129 09:21:54.539741 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:55.53973095 +0000 UTC m=+935.662571177 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "webhook-server-cert" not found Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.542392 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" event={"ID":"f9b6f2b9-26dd-44f5-859d-f9a1828d726d","Type":"ContainerStarted","Data":"37a0edb3155ddbe768862e8c113a75a3608048ec3c767f874a560d26da353f5a"} Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.550580 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" event={"ID":"b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e","Type":"ContainerStarted","Data":"661495c4374eace67ea6b37b7a8c7a476cf7ca830ae1bd1f4bf47470c3c7cfe9"} Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.566154 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" event={"ID":"8373cf12-3567-409b-ae85-1f530e91c86a","Type":"ContainerStarted","Data":"f090947ba20d8f365bdcc2e12d1728517b5652ffbd0447c11682b9ca84bc1745"} Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.571790 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" event={"ID":"7f152534-d323-4bdc-9d0e-86e673b65a56","Type":"ContainerStarted","Data":"2ee7fca841e14b839d67af209a5c319fad6ade3c9dc5e92bfef2ed8572e52233"} Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.581390 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.588137 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.596161 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8"] Jan 29 09:21:54 crc kubenswrapper[4771]: W0129 09:21:54.614399 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod013a2529_271b_4c1d_8ac4_3b443a9d1069.slice/crio-8ac91f9156eeb3f0dfea743643e9a0797c1faba60e5abc2060a96ed735b1342d WatchSource:0}: Error finding container 8ac91f9156eeb3f0dfea743643e9a0797c1faba60e5abc2060a96ed735b1342d: Status 404 returned error can't find the container with id 8ac91f9156eeb3f0dfea743643e9a0797c1faba60e5abc2060a96ed735b1342d Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.830928 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2"] Jan 29 09:21:54 crc kubenswrapper[4771]: W0129 09:21:54.846069 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbfc6317_5079_4f9f_83e0_9f93970a0710.slice/crio-99ae947678673f58f9c9256950fc89354fd293f68616f11db61934f8aef17779 WatchSource:0}: Error finding container 99ae947678673f58f9c9256950fc89354fd293f68616f11db61934f8aef17779: Status 404 returned error can't find the container with id 99ae947678673f58f9c9256950fc89354fd293f68616f11db61934f8aef17779 Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.847256 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-j8578"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.978007 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v"] Jan 29 09:21:54 crc kubenswrapper[4771]: W0129 09:21:54.982627 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc594b46c_4d8f_4604_a70d_91544ff13805.slice/crio-652ecec0ce94de6bb929e7d3ebc24fa20a0398f742ef8e654ea1a80aa4344ca8 WatchSource:0}: Error finding container 652ecec0ce94de6bb929e7d3ebc24fa20a0398f742ef8e654ea1a80aa4344ca8: Status 404 returned error can't find the container with id 652ecec0ce94de6bb929e7d3ebc24fa20a0398f742ef8e654ea1a80aa4344ca8 Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.986679 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v"] Jan 29 09:21:54 crc kubenswrapper[4771]: I0129 09:21:54.992016 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd"] Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.000510 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f"] Jan 29 09:21:55 crc kubenswrapper[4771]: W0129 09:21:55.012566 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e92467_0347_42d4_9628_639368c69b80.slice/crio-dfd09f43c2f82cbd3734ada274f47b558ed2cc8d6d099562ed6d69dd7983d677 WatchSource:0}: Error finding container dfd09f43c2f82cbd3734ada274f47b558ed2cc8d6d099562ed6d69dd7983d677: Status 404 returned error can't find the container with id dfd09f43c2f82cbd3734ada274f47b558ed2cc8d6d099562ed6d69dd7983d677 Jan 29 09:21:55 crc kubenswrapper[4771]: W0129 09:21:55.013028 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod488821bb_04ee_4c62_b4a3_ac83d74a8919.slice/crio-675994f72fdc26aaa86cfffa981d436dddae1fc3c5346abf385226ae18ecf7d7 WatchSource:0}: Error finding 
container 675994f72fdc26aaa86cfffa981d436dddae1fc3c5346abf385226ae18ecf7d7: Status 404 returned error can't find the container with id 675994f72fdc26aaa86cfffa981d436dddae1fc3c5346abf385226ae18ecf7d7 Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.017427 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtd7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-pkgdd_openstack-operators(c7e92467-0347-42d4-9628-639368c69b80): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.018599 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" podUID="c7e92467-0347-42d4-9628-639368c69b80" Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.018876 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xzjk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-mkl7v_openstack-operators(488821bb-04ee-4c62-b4a3-ac83d74a8919): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.020284 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" podUID="488821bb-04ee-4c62-b4a3-ac83d74a8919" Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.106799 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6"] Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.111929 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-66qbk"] Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.124674 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m"] Jan 29 09:21:55 crc kubenswrapper[4771]: W0129 09:21:55.127218 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688a9f5e_ab0d_4975_a033_a8cdf403fd9e.slice/crio-5328c3131cd573afbddc41ac6ba40e03b2aa75654b06500ae7c38320f5997c0f WatchSource:0}: Error finding container 5328c3131cd573afbddc41ac6ba40e03b2aa75654b06500ae7c38320f5997c0f: Status 404 returned error 
can't find the container with id 5328c3131cd573afbddc41ac6ba40e03b2aa75654b06500ae7c38320f5997c0f Jan 29 09:21:55 crc kubenswrapper[4771]: W0129 09:21:55.128987 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5048415a_36d8_47a9_aed1_f7395e309ce3.slice/crio-0c679cf3b27b91e8091188a803405337b30926db14bbf9dee24f5600e3f5096a WatchSource:0}: Error finding container 0c679cf3b27b91e8091188a803405337b30926db14bbf9dee24f5600e3f5096a: Status 404 returned error can't find the container with id 0c679cf3b27b91e8091188a803405337b30926db14bbf9dee24f5600e3f5096a Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.133509 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9ffqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-p7tj6_openstack-operators(688a9f5e-ab0d-4975-a033-a8cdf403fd9e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.134802 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" podUID="688a9f5e-ab0d-4975-a033-a8cdf403fd9e" Jan 29 09:21:55 crc 
kubenswrapper[4771]: W0129 09:21:55.141199 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40f4ff01_59ff_4cb1_a683_6e1da9756691.slice/crio-463204c5925b27e9c731cd4ee152e21a3a609bfe73d4da62f5f009e90f3a12ba WatchSource:0}: Error finding container 463204c5925b27e9c731cd4ee152e21a3a609bfe73d4da62f5f009e90f3a12ba: Status 404 returned error can't find the container with id 463204c5925b27e9c731cd4ee152e21a3a609bfe73d4da62f5f009e90f3a12ba Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.144480 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25mbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9gz6m_openstack-operators(40f4ff01-59ff-4cb1-a683-6e1da9756691): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.146229 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" podUID="40f4ff01-59ff-4cb1-a683-6e1da9756691" Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.249715 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.249926 4771 
secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.250027 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert podName:f5c86f6b-dae6-4551-9413-df4e429c0ffa nodeName:}" failed. No retries permitted until 2026-01-29 09:21:57.250003669 +0000 UTC m=+937.372844056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" (UID: "f5c86f6b-dae6-4551-9413-df4e429c0ffa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.555403 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.555493 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.555882 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.555881 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.555950 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:57.555932691 +0000 UTC m=+937.678772918 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "metrics-server-cert" not found Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.556023 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:21:57.555992063 +0000 UTC m=+937.678832470 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "webhook-server-cert" not found Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.583224 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" event={"ID":"488821bb-04ee-4c62-b4a3-ac83d74a8919","Type":"ContainerStarted","Data":"675994f72fdc26aaa86cfffa981d436dddae1fc3c5346abf385226ae18ecf7d7"} Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.584582 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" podUID="488821bb-04ee-4c62-b4a3-ac83d74a8919" Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.588366 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" event={"ID":"9ae26fe7-fcd5-4006-aa5d-133b8b91e521","Type":"ContainerStarted","Data":"d0db0ff2353f3ee875769f0fdfd3ba793b1aee072d20d9b64824953c7863b648"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.591529 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" event={"ID":"e0d87d52-0e91-4f3f-bbf2-228b57bbcff7","Type":"ContainerStarted","Data":"96132d72e2b6e5d0ab5e8014f96d5ac95ceebed6d9d680d3dd9673c478d64c48"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.594828 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" event={"ID":"5048415a-36d8-47a9-aed1-f7395e309ce3","Type":"ContainerStarted","Data":"0c679cf3b27b91e8091188a803405337b30926db14bbf9dee24f5600e3f5096a"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.596415 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" event={"ID":"742db07e-b8fa-472a-824c-ce57c4e3bca5","Type":"ContainerStarted","Data":"2b605e8bceed1e8dd70a4764e597292b5574fb9f51a21cb6eeba8422c1c3db00"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.598887 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" event={"ID":"c7e92467-0347-42d4-9628-639368c69b80","Type":"ContainerStarted","Data":"dfd09f43c2f82cbd3734ada274f47b558ed2cc8d6d099562ed6d69dd7983d677"} Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.602560 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" podUID="c7e92467-0347-42d4-9628-639368c69b80" Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.605878 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" 
event={"ID":"40f4ff01-59ff-4cb1-a683-6e1da9756691","Type":"ContainerStarted","Data":"463204c5925b27e9c731cd4ee152e21a3a609bfe73d4da62f5f009e90f3a12ba"} Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.608282 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" podUID="40f4ff01-59ff-4cb1-a683-6e1da9756691" Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.609564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" event={"ID":"5c0452ac-093e-45b1-825f-3ba01ed93425","Type":"ContainerStarted","Data":"1f464df2db4a710ba5bb59657c8a389da7ddaa73350d086e5109cec54b1e19a1"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.612382 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" event={"ID":"013a2529-271b-4c1d-8ac4-3b443a9d1069","Type":"ContainerStarted","Data":"8ac91f9156eeb3f0dfea743643e9a0797c1faba60e5abc2060a96ed735b1342d"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.614895 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" event={"ID":"a9c9f6d2-b488-4184-b7dd-46e228737c64","Type":"ContainerStarted","Data":"cb3bbfb214cce7a23da6470b7bb17ec2ca94afb68530fa0f9b30f9e1a6f6573e"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.624895 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" event={"ID":"bbfc6317-5079-4f9f-83e0-9f93970a0710","Type":"ContainerStarted","Data":"99ae947678673f58f9c9256950fc89354fd293f68616f11db61934f8aef17779"} Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.628408 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" event={"ID":"688a9f5e-ab0d-4975-a033-a8cdf403fd9e","Type":"ContainerStarted","Data":"5328c3131cd573afbddc41ac6ba40e03b2aa75654b06500ae7c38320f5997c0f"} Jan 29 09:21:55 crc kubenswrapper[4771]: E0129 09:21:55.631255 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" podUID="688a9f5e-ab0d-4975-a033-a8cdf403fd9e" Jan 29 09:21:55 crc kubenswrapper[4771]: I0129 09:21:55.635084 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" event={"ID":"c594b46c-4d8f-4604-a70d-91544ff13805","Type":"ContainerStarted","Data":"652ecec0ce94de6bb929e7d3ebc24fa20a0398f742ef8e654ea1a80aa4344ca8"} Jan 29 09:21:56 crc kubenswrapper[4771]: I0129 09:21:56.472895 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:21:56 crc kubenswrapper[4771]: E0129 09:21:56.473055 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:56 crc kubenswrapper[4771]: E0129 09:21:56.473115 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert podName:be5b01ce-6d7f-40e9-9e6e-3291fab1d242 nodeName:}" failed. No retries permitted until 2026-01-29 09:22:00.473099641 +0000 UTC m=+940.595939868 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert") pod "infra-operator-controller-manager-79955696d6-ltjpd" (UID: "be5b01ce-6d7f-40e9-9e6e-3291fab1d242") : secret "infra-operator-webhook-server-cert" not found Jan 29 09:21:56 crc kubenswrapper[4771]: E0129 09:21:56.673411 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" podUID="488821bb-04ee-4c62-b4a3-ac83d74a8919" Jan 29 09:21:56 crc kubenswrapper[4771]: E0129 09:21:56.673861 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" podUID="688a9f5e-ab0d-4975-a033-a8cdf403fd9e" Jan 29 09:21:56 crc kubenswrapper[4771]: E0129 09:21:56.674271 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" podUID="c7e92467-0347-42d4-9628-639368c69b80" Jan 29 09:21:56 crc kubenswrapper[4771]: E0129 09:21:56.674629 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" podUID="40f4ff01-59ff-4cb1-a683-6e1da9756691" Jan 29 09:21:56 crc kubenswrapper[4771]: I0129 09:21:56.879589 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lp6kd"] Jan 29 09:21:56 crc kubenswrapper[4771]: I0129 09:21:56.883389 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:56 crc kubenswrapper[4771]: I0129 09:21:56.897556 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp6kd"] Jan 29 09:21:56 crc kubenswrapper[4771]: I0129 09:21:56.981505 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-catalog-content\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:56 crc kubenswrapper[4771]: I0129 09:21:56.981673 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgr6\" (UniqueName: \"kubernetes.io/projected/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-kube-api-access-8lgr6\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:56 crc kubenswrapper[4771]: I0129 09:21:56.981753 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-utilities\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.084772 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-utilities\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.084835 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-catalog-content\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.084958 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lgr6\" (UniqueName: \"kubernetes.io/projected/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-kube-api-access-8lgr6\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.085916 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-utilities\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.086220 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-catalog-content\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.114526 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8lgr6\" (UniqueName: \"kubernetes.io/projected/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-kube-api-access-8lgr6\") pod \"redhat-marketplace-lp6kd\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.210956 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.287514 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:21:57 crc kubenswrapper[4771]: E0129 09:21:57.287663 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:57 crc kubenswrapper[4771]: E0129 09:21:57.287732 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert podName:f5c86f6b-dae6-4551-9413-df4e429c0ffa nodeName:}" failed. No retries permitted until 2026-01-29 09:22:01.287714583 +0000 UTC m=+941.410554810 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" (UID: "f5c86f6b-dae6-4551-9413-df4e429c0ffa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.606762 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:57 crc kubenswrapper[4771]: I0129 09:21:57.606834 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:21:57 crc kubenswrapper[4771]: E0129 09:21:57.607023 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 09:21:57 crc kubenswrapper[4771]: E0129 09:21:57.607088 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:22:01.607069019 +0000 UTC m=+941.729909246 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "metrics-server-cert" not found Jan 29 09:21:57 crc kubenswrapper[4771]: E0129 09:21:57.607477 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 09:21:57 crc kubenswrapper[4771]: E0129 09:21:57.607507 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:22:01.607497141 +0000 UTC m=+941.730337368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "webhook-server-cert" not found Jan 29 09:22:00 crc kubenswrapper[4771]: I0129 09:22:00.481124 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:22:00 crc kubenswrapper[4771]: E0129 09:22:00.481686 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 09:22:00 crc kubenswrapper[4771]: E0129 09:22:00.481894 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert podName:be5b01ce-6d7f-40e9-9e6e-3291fab1d242 nodeName:}" failed. No retries permitted until 2026-01-29 09:22:08.481878038 +0000 UTC m=+948.604718265 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert") pod "infra-operator-controller-manager-79955696d6-ltjpd" (UID: "be5b01ce-6d7f-40e9-9e6e-3291fab1d242") : secret "infra-operator-webhook-server-cert" not found Jan 29 09:22:01 crc kubenswrapper[4771]: I0129 09:22:01.294521 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:22:01 crc kubenswrapper[4771]: E0129 09:22:01.294746 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:22:01 crc kubenswrapper[4771]: E0129 09:22:01.294861 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert podName:f5c86f6b-dae6-4551-9413-df4e429c0ffa nodeName:}" failed. No retries permitted until 2026-01-29 09:22:09.294825875 +0000 UTC m=+949.417666102 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" (UID: "f5c86f6b-dae6-4551-9413-df4e429c0ffa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:22:01 crc kubenswrapper[4771]: I0129 09:22:01.700220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:01 crc kubenswrapper[4771]: I0129 09:22:01.700382 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:01 crc kubenswrapper[4771]: E0129 09:22:01.700510 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 09:22:01 crc kubenswrapper[4771]: E0129 09:22:01.700659 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:22:09.700626101 +0000 UTC m=+949.823466328 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "metrics-server-cert" not found Jan 29 09:22:01 crc kubenswrapper[4771]: E0129 09:22:01.700547 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 09:22:01 crc kubenswrapper[4771]: E0129 09:22:01.701313 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs podName:72d8430d-b468-4e7b-a568-bb12c9a4c856 nodeName:}" failed. No retries permitted until 2026-01-29 09:22:09.701271679 +0000 UTC m=+949.824112066 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs") pod "openstack-operator-controller-manager-6cf4dc6f96-vhpg8" (UID: "72d8430d-b468-4e7b-a568-bb12c9a4c856") : secret "webhook-server-cert" not found Jan 29 09:22:05 crc kubenswrapper[4771]: E0129 09:22:05.757429 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 29 09:22:05 crc kubenswrapper[4771]: E0129 09:22:05.758118 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5476,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-kzwjr_openstack-operators(b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:05 crc kubenswrapper[4771]: E0129 09:22:05.759431 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" podUID="b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e" Jan 29 09:22:06 crc kubenswrapper[4771]: E0129 09:22:06.437549 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 29 09:22:06 crc kubenswrapper[4771]: E0129 09:22:06.438199 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djgqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-rjn7t_openstack-operators(f9b6f2b9-26dd-44f5-859d-f9a1828d726d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:06 crc kubenswrapper[4771]: E0129 09:22:06.439386 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" podUID="f9b6f2b9-26dd-44f5-859d-f9a1828d726d" Jan 29 09:22:06 crc kubenswrapper[4771]: E0129 09:22:06.779007 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" podUID="f9b6f2b9-26dd-44f5-859d-f9a1828d726d" Jan 29 09:22:06 crc kubenswrapper[4771]: E0129 09:22:06.779048 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" podUID="b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e" Jan 29 09:22:08 crc kubenswrapper[4771]: E0129 09:22:08.125632 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 29 09:22:08 crc kubenswrapper[4771]: E0129 09:22:08.126196 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xfxb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-8886f4c47-59xvx_openstack-operators(29710697-a286-413b-a7ce-01631b4cc6de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:08 crc kubenswrapper[4771]: E0129 09:22:08.127472 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" podUID="29710697-a286-413b-a7ce-01631b4cc6de" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.517135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.541421 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be5b01ce-6d7f-40e9-9e6e-3291fab1d242-cert\") pod \"infra-operator-controller-manager-79955696d6-ltjpd\" (UID: \"be5b01ce-6d7f-40e9-9e6e-3291fab1d242\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.578549 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tglqc"] Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.581044 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.595590 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tglqc"] Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.605304 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.720609 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4krg\" (UniqueName: \"kubernetes.io/projected/e7b31489-fbef-4fd7-b789-8b20a2b565f1-kube-api-access-d4krg\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.720728 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-catalog-content\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.720953 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-utilities\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: E0129 09:22:08.791329 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" podUID="29710697-a286-413b-a7ce-01631b4cc6de" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.822906 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-catalog-content\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.822972 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-utilities\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.823056 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4krg\" (UniqueName: \"kubernetes.io/projected/e7b31489-fbef-4fd7-b789-8b20a2b565f1-kube-api-access-d4krg\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.823444 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-catalog-content\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.823628 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-utilities\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.850276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4krg\" (UniqueName: \"kubernetes.io/projected/e7b31489-fbef-4fd7-b789-8b20a2b565f1-kube-api-access-d4krg\") pod \"certified-operators-tglqc\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:08 crc kubenswrapper[4771]: I0129 09:22:08.960622 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:09 crc kubenswrapper[4771]: E0129 09:22:09.106851 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 29 09:22:09 crc kubenswrapper[4771]: E0129 09:22:09.107051 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7ftp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
octavia-operator-controller-manager-6687f8d877-q74t2_openstack-operators(9ae26fe7-fcd5-4006-aa5d-133b8b91e521): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:09 crc kubenswrapper[4771]: E0129 09:22:09.108227 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" podUID="9ae26fe7-fcd5-4006-aa5d-133b8b91e521" Jan 29 09:22:09 crc kubenswrapper[4771]: I0129 09:22:09.333380 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:22:09 crc kubenswrapper[4771]: E0129 09:22:09.333574 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:22:09 crc kubenswrapper[4771]: E0129 09:22:09.333756 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert podName:f5c86f6b-dae6-4551-9413-df4e429c0ffa nodeName:}" failed. No retries permitted until 2026-01-29 09:22:25.333735105 +0000 UTC m=+965.456575332 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" (UID: "f5c86f6b-dae6-4551-9413-df4e429c0ffa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 09:22:09 crc kubenswrapper[4771]: I0129 09:22:09.740067 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:09 crc kubenswrapper[4771]: I0129 09:22:09.740217 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:09 crc kubenswrapper[4771]: I0129 09:22:09.748549 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-metrics-certs\") pod \"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:09 crc kubenswrapper[4771]: I0129 09:22:09.761799 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/72d8430d-b468-4e7b-a568-bb12c9a4c856-webhook-certs\") pod 
\"openstack-operator-controller-manager-6cf4dc6f96-vhpg8\" (UID: \"72d8430d-b468-4e7b-a568-bb12c9a4c856\") " pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:09 crc kubenswrapper[4771]: E0129 09:22:09.802851 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" podUID="9ae26fe7-fcd5-4006-aa5d-133b8b91e521" Jan 29 09:22:09 crc kubenswrapper[4771]: I0129 09:22:09.840737 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:10 crc kubenswrapper[4771]: E0129 09:22:10.611733 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 09:22:10 crc kubenswrapper[4771]: E0129 09:22:10.612132 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24xkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-w9c7v_openstack-operators(c594b46c-4d8f-4604-a70d-91544ff13805): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:10 crc kubenswrapper[4771]: E0129 09:22:10.613492 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" podUID="c594b46c-4d8f-4604-a70d-91544ff13805" Jan 29 09:22:10 crc kubenswrapper[4771]: E0129 09:22:10.805088 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" podUID="c594b46c-4d8f-4604-a70d-91544ff13805" Jan 29 09:22:10 crc kubenswrapper[4771]: I0129 09:22:10.971495 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bpzb"] Jan 29 09:22:10 crc kubenswrapper[4771]: I0129 09:22:10.975414 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:10 crc kubenswrapper[4771]: I0129 09:22:10.989574 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bpzb"] Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.068241 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-utilities\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.068520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42d2n\" (UniqueName: \"kubernetes.io/projected/08b285b9-2a98-4751-a845-78ac87d4c3f1-kube-api-access-42d2n\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.068753 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-catalog-content\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.173164 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42d2n\" (UniqueName: \"kubernetes.io/projected/08b285b9-2a98-4751-a845-78ac87d4c3f1-kube-api-access-42d2n\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.173343 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-catalog-content\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.173394 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-utilities\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.174026 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-catalog-content\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.174083 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-utilities\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.198803 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-42d2n\" (UniqueName: \"kubernetes.io/projected/08b285b9-2a98-4751-a845-78ac87d4c3f1-kube-api-access-42d2n\") pod \"community-operators-5bpzb\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: I0129 09:22:11.305206 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:11 crc kubenswrapper[4771]: E0129 09:22:11.957660 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 09:22:11 crc kubenswrapper[4771]: E0129 09:22:11.958436 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pz569,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-qpqh8_openstack-operators(013a2529-271b-4c1d-8ac4-3b443a9d1069): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:11 crc kubenswrapper[4771]: E0129 09:22:11.959576 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" podUID="013a2529-271b-4c1d-8ac4-3b443a9d1069" Jan 29 09:22:12 crc kubenswrapper[4771]: E0129 09:22:12.756083 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c" Jan 29 09:22:12 crc kubenswrapper[4771]: E0129 09:22:12.756383 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24nq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b6c4d8c5f-t96kk_openstack-operators(742db07e-b8fa-472a-824c-ce57c4e3bca5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:12 crc kubenswrapper[4771]: E0129 09:22:12.757558 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" 
podUID="742db07e-b8fa-472a-824c-ce57c4e3bca5" Jan 29 09:22:12 crc kubenswrapper[4771]: E0129 09:22:12.819542 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" podUID="742db07e-b8fa-472a-824c-ce57c4e3bca5" Jan 29 09:22:12 crc kubenswrapper[4771]: E0129 09:22:12.819598 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" podUID="013a2529-271b-4c1d-8ac4-3b443a9d1069" Jan 29 09:22:13 crc kubenswrapper[4771]: E0129 09:22:13.740988 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 29 09:22:13 crc kubenswrapper[4771]: E0129 09:22:13.741164 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7fbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-66qbk_openstack-operators(5048415a-36d8-47a9-aed1-f7395e309ce3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:13 crc kubenswrapper[4771]: E0129 09:22:13.742454 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" podUID="5048415a-36d8-47a9-aed1-f7395e309ce3" Jan 29 09:22:13 crc kubenswrapper[4771]: E0129 09:22:13.839247 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" podUID="5048415a-36d8-47a9-aed1-f7395e309ce3" Jan 29 09:22:14 crc kubenswrapper[4771]: I0129 09:22:14.339741 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp6kd"] Jan 29 09:22:14 crc kubenswrapper[4771]: E0129 09:22:14.698133 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 09:22:14 crc kubenswrapper[4771]: E0129 09:22:14.698356 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kddrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-wfjjb_openstack-operators(8373cf12-3567-409b-ae85-1f530e91c86a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:14 crc kubenswrapper[4771]: E0129 09:22:14.699549 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" podUID="8373cf12-3567-409b-ae85-1f530e91c86a" Jan 29 09:22:14 crc kubenswrapper[4771]: E0129 09:22:14.847184 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" podUID="8373cf12-3567-409b-ae85-1f530e91c86a" Jan 29 09:22:15 crc kubenswrapper[4771]: E0129 09:22:15.263185 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 29 09:22:15 crc kubenswrapper[4771]: E0129 09:22:15.263385 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvf9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-j8578_openstack-operators(bbfc6317-5079-4f9f-83e0-9f93970a0710): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:15 crc kubenswrapper[4771]: E0129 09:22:15.264636 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" podUID="bbfc6317-5079-4f9f-83e0-9f93970a0710" Jan 29 09:22:15 crc kubenswrapper[4771]: E0129 09:22:15.851561 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" podUID="bbfc6317-5079-4f9f-83e0-9f93970a0710" Jan 29 09:22:16 crc kubenswrapper[4771]: I0129 09:22:16.860024 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp6kd" event={"ID":"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee","Type":"ContainerStarted","Data":"3c7563403b8efc4e3b49760b2f2a58266223edfbbb9cbbcca28c03beca062a2f"} Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.388944 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd"] Jan 29 09:22:19 crc kubenswrapper[4771]: W0129 09:22:19.419207 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe5b01ce_6d7f_40e9_9e6e_3291fab1d242.slice/crio-da9955866357f2ef9e56210ffdeb010e3266ff7ecb9265be971023ce6901c1ee WatchSource:0}: Error finding container da9955866357f2ef9e56210ffdeb010e3266ff7ecb9265be971023ce6901c1ee: Status 404 returned error can't find the container with id da9955866357f2ef9e56210ffdeb010e3266ff7ecb9265be971023ce6901c1ee Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.459380 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tglqc"] Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.489364 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8"] Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.508518 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bpzb"] Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.901796 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" event={"ID":"6221aa48-bc7d-4a2f-9897-41dae47815e7","Type":"ContainerStarted","Data":"b36980fb10e7ad7442af64f6a6078bdc861c3ed1d5eb04445685dfdbf6c4f305"} Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.902673 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.919941 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" event={"ID":"688a9f5e-ab0d-4975-a033-a8cdf403fd9e","Type":"ContainerStarted","Data":"40c18680b98c3f1ebc5075aeef098db3c2255cc31fed3713aa9148a4013efda6"} Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.920238 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.928532 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" event={"ID":"7f152534-d323-4bdc-9d0e-86e673b65a56","Type":"ContainerStarted","Data":"c6c43218bff8afd6db77107c334dbf80167ac0604e39a03150055d2c9293eeb5"} Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.928872 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.942271 4771 generic.go:334] "Generic (PLEG): container finished" podID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerID="b9633a5c7d12eb5022da9d085264415d80c943836f18808b1c1eb42cbd6a91d1" exitCode=0 Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.942394 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp6kd" event={"ID":"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee","Type":"ContainerDied","Data":"b9633a5c7d12eb5022da9d085264415d80c943836f18808b1c1eb42cbd6a91d1"} Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.948328 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" podStartSLOduration=8.043127798 podStartE2EDuration="27.948289112s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:53.821488266 +0000 UTC m=+933.944328493" lastFinishedPulling="2026-01-29 09:22:13.72664958 +0000 UTC m=+953.849489807" observedRunningTime="2026-01-29 09:22:19.944314504 +0000 UTC m=+960.067154741" watchObservedRunningTime="2026-01-29 09:22:19.948289112 +0000 UTC m=+960.071129339" Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.978955 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" event={"ID":"c7e92467-0347-42d4-9628-639368c69b80","Type":"ContainerStarted","Data":"40712b4f06fbb123ede0a0b503431f772b77081db65dac1b7e3c3900a4d25f7b"} Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.979854 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.991574 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" event={"ID":"da6fb9cf-4fe9-41a8-a645-0d98a36e9472","Type":"ContainerStarted","Data":"3a517cb4cc11a0f13b8b5af63bc056166f2a887468067240c9861379d8e182e6"} Jan 29 09:22:19 crc kubenswrapper[4771]: I0129 09:22:19.992033 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.003062 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bpzb" event={"ID":"08b285b9-2a98-4751-a845-78ac87d4c3f1","Type":"ContainerStarted","Data":"8b58748f64024bd837d8594c390fd406a2c927703dc20d4d9c8d62f53f37eaef"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.016239 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" podStartSLOduration=4.274123854 podStartE2EDuration="28.016215531s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:55.13333325 +0000 UTC m=+935.256173477" lastFinishedPulling="2026-01-29 09:22:18.875424927 +0000 UTC m=+958.998265154" observedRunningTime="2026-01-29 09:22:20.001598885 +0000 UTC m=+960.124439112" watchObservedRunningTime="2026-01-29 09:22:20.016215531 +0000 UTC m=+960.139055758" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.018484 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" event={"ID":"488821bb-04ee-4c62-b4a3-ac83d74a8919","Type":"ContainerStarted","Data":"91a5eed7ebc2819909166b9db5310eeaa33e9ba0f52ff873bfee49bcda7565f9"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.019675 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.023465 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" event={"ID":"e0d87d52-0e91-4f3f-bbf2-228b57bbcff7","Type":"ContainerStarted","Data":"8d6a21e82296b4430709f3c13391f3b5753e96da4e9fa875ea213824cbb679c7"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 
09:22:20.024459 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.030413 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" podStartSLOduration=3.60500545 podStartE2EDuration="28.030388195s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.346243002 +0000 UTC m=+934.469083229" lastFinishedPulling="2026-01-29 09:22:18.771625737 +0000 UTC m=+958.894465974" observedRunningTime="2026-01-29 09:22:20.029254654 +0000 UTC m=+960.152094881" watchObservedRunningTime="2026-01-29 09:22:20.030388195 +0000 UTC m=+960.153228422" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.050293 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" event={"ID":"a9c9f6d2-b488-4184-b7dd-46e228737c64","Type":"ContainerStarted","Data":"d295a5a20f0d1f5800677b3402da7a5276a0855ec8027dbc55d899c3cd4449cd"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.050339 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.058184 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" event={"ID":"40f4ff01-59ff-4cb1-a683-6e1da9756691","Type":"ContainerStarted","Data":"51d7c9573b8c522543fb0a0190fcba36701b2ea04fb96cae549d3511918c3637"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.069510 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" podStartSLOduration=3.68959913 podStartE2EDuration="28.069476843s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.362234575 +0000 UTC m=+934.485074802" lastFinishedPulling="2026-01-29 09:22:18.742112288 +0000 UTC m=+958.864952515" observedRunningTime="2026-01-29 09:22:20.067240342 +0000 UTC m=+960.190080569" watchObservedRunningTime="2026-01-29 09:22:20.069476843 +0000 UTC m=+960.192317070" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.069845 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" event={"ID":"be5b01ce-6d7f-40e9-9e6e-3291fab1d242","Type":"ContainerStarted","Data":"da9955866357f2ef9e56210ffdeb010e3266ff7ecb9265be971023ce6901c1ee"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.087438 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" event={"ID":"5c0452ac-093e-45b1-825f-3ba01ed93425","Type":"ContainerStarted","Data":"44f906d2fc778d049668c9a65631fa119090ad6182ff59821a76f8bf158eb925"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.091473 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.104707 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" podStartSLOduration=3.226017432 
podStartE2EDuration="27.104672776s" podCreationTimestamp="2026-01-29 09:21:53 +0000 UTC" firstStartedPulling="2026-01-29 09:21:55.014320128 +0000 UTC m=+935.137160355" lastFinishedPulling="2026-01-29 09:22:18.892975472 +0000 UTC m=+959.015815699" observedRunningTime="2026-01-29 09:22:20.102183318 +0000 UTC m=+960.225023555" watchObservedRunningTime="2026-01-29 09:22:20.104672776 +0000 UTC m=+960.227513013" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.106804 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" event={"ID":"72d8430d-b468-4e7b-a568-bb12c9a4c856","Type":"ContainerStarted","Data":"bfadbbbc2b30c61300fdfb67170b2dce0661f9c03f3b394405b4d3249f9c449d"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.106856 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" event={"ID":"72d8430d-b468-4e7b-a568-bb12c9a4c856","Type":"ContainerStarted","Data":"dea4f46ca97c2a14bd32a421f33ae5a66ac782df2e39e1d077fc3d0ce8e2032d"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.107627 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.122094 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tglqc" event={"ID":"e7b31489-fbef-4fd7-b789-8b20a2b565f1","Type":"ContainerStarted","Data":"4dc8de642de1e155361695a0cf3f2c8385c228487fc0a627de28c8d23248b04b"} Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.144515 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" podStartSLOduration=4.008417121 podStartE2EDuration="28.144479503s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.606124318 +0000 UTC m=+934.728964545" lastFinishedPulling="2026-01-29 09:22:18.7421867 +0000 UTC m=+958.865026927" observedRunningTime="2026-01-29 09:22:20.13659606 +0000 UTC m=+960.259436297" watchObservedRunningTime="2026-01-29 09:22:20.144479503 +0000 UTC m=+960.267319730" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.172133 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" podStartSLOduration=4.317874218 podStartE2EDuration="28.172105541s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:55.018740068 +0000 UTC m=+935.141580295" lastFinishedPulling="2026-01-29 09:22:18.872971391 +0000 UTC m=+958.995811618" observedRunningTime="2026-01-29 09:22:20.163955291 +0000 UTC m=+960.286795538" watchObservedRunningTime="2026-01-29 09:22:20.172105541 +0000 UTC m=+960.294945768" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.184436 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9gz6m" podStartSLOduration=3.455758811 podStartE2EDuration="27.184416585s" podCreationTimestamp="2026-01-29 09:21:53 +0000 UTC" firstStartedPulling="2026-01-29 09:21:55.144341738 +0000 UTC m=+935.267181965" lastFinishedPulling="2026-01-29 09:22:18.872999512 +0000 UTC m=+958.995839739" observedRunningTime="2026-01-29 09:22:20.180398026 +0000 UTC m=+960.303238253" 
watchObservedRunningTime="2026-01-29 09:22:20.184416585 +0000 UTC m=+960.307256812" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.232516 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" podStartSLOduration=5.771054888 podStartE2EDuration="28.232497636s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.990596376 +0000 UTC m=+935.113436603" lastFinishedPulling="2026-01-29 09:22:17.452039124 +0000 UTC m=+957.574879351" observedRunningTime="2026-01-29 09:22:20.231841069 +0000 UTC m=+960.354681296" watchObservedRunningTime="2026-01-29 09:22:20.232497636 +0000 UTC m=+960.355337863" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.264142 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" podStartSLOduration=27.264119522 podStartE2EDuration="27.264119522s" podCreationTimestamp="2026-01-29 09:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:22:20.260831193 +0000 UTC m=+960.383671420" watchObservedRunningTime="2026-01-29 09:22:20.264119522 +0000 UTC m=+960.386959759" Jan 29 09:22:20 crc kubenswrapper[4771]: I0129 09:22:20.298163 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" podStartSLOduration=4.13582379 podStartE2EDuration="28.298142653s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.608824781 +0000 UTC m=+934.731665008" lastFinishedPulling="2026-01-29 09:22:18.771143644 +0000 UTC m=+958.893983871" observedRunningTime="2026-01-29 09:22:20.295185203 +0000 UTC m=+960.418025430" watchObservedRunningTime="2026-01-29 09:22:20.298142653 +0000 UTC m=+960.420982880" Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.139406 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" event={"ID":"b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e","Type":"ContainerStarted","Data":"faa531466d992c828dda31690c2069adb693cb3744fad32c8a24b6acc83aa679"} Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.140999 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.145355 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp6kd" event={"ID":"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee","Type":"ContainerStarted","Data":"ad59e59162be2d762825ab64901e5ae97dedf258fa1d7d4006ca24837ddad4aa"} Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.147449 4771 generic.go:334] "Generic (PLEG): container finished" podID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerID="6f404b11b3a007f9bb56a7fba156b99c3fe8842fd49cb962783fadf0f8c45b91" exitCode=0 Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.147509 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tglqc" event={"ID":"e7b31489-fbef-4fd7-b789-8b20a2b565f1","Type":"ContainerDied","Data":"6f404b11b3a007f9bb56a7fba156b99c3fe8842fd49cb962783fadf0f8c45b91"} Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.148804 4771 generic.go:334] 
"Generic (PLEG): container finished" podID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerID="2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b" exitCode=0 Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.149811 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bpzb" event={"ID":"08b285b9-2a98-4751-a845-78ac87d4c3f1","Type":"ContainerDied","Data":"2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b"} Jan 29 09:22:21 crc kubenswrapper[4771]: I0129 09:22:21.167190 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" podStartSLOduration=2.727676659 podStartE2EDuration="29.16716392s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:53.897067542 +0000 UTC m=+934.019907769" lastFinishedPulling="2026-01-29 09:22:20.336554803 +0000 UTC m=+960.459395030" observedRunningTime="2026-01-29 09:22:21.161002223 +0000 UTC m=+961.283842450" watchObservedRunningTime="2026-01-29 09:22:21.16716392 +0000 UTC m=+961.290004147" Jan 29 09:22:22 crc kubenswrapper[4771]: I0129 09:22:22.156361 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" event={"ID":"9ae26fe7-fcd5-4006-aa5d-133b8b91e521","Type":"ContainerStarted","Data":"aff8d6fe561f8abbbf0620e3881992cedaccd8bb8a8a0c43209c4abdc7c2b5b5"} Jan 29 09:22:22 crc kubenswrapper[4771]: I0129 09:22:22.157142 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" Jan 29 09:22:22 crc kubenswrapper[4771]: I0129 09:22:22.158103 4771 generic.go:334] "Generic (PLEG): container finished" podID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerID="b7688842b3334da9fabe727f76de98ed50aaf9a01756be1c8676e85142e964ab" exitCode=0 Jan 29 09:22:22 crc kubenswrapper[4771]: I0129 09:22:22.158135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tglqc" event={"ID":"e7b31489-fbef-4fd7-b789-8b20a2b565f1","Type":"ContainerDied","Data":"b7688842b3334da9fabe727f76de98ed50aaf9a01756be1c8676e85142e964ab"} Jan 29 09:22:22 crc kubenswrapper[4771]: I0129 09:22:22.159965 4771 generic.go:334] "Generic (PLEG): container finished" podID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerID="ad59e59162be2d762825ab64901e5ae97dedf258fa1d7d4006ca24837ddad4aa" exitCode=0 Jan 29 09:22:22 crc kubenswrapper[4771]: I0129 09:22:22.160128 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp6kd" event={"ID":"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee","Type":"ContainerDied","Data":"ad59e59162be2d762825ab64901e5ae97dedf258fa1d7d4006ca24837ddad4aa"} Jan 29 09:22:22 crc kubenswrapper[4771]: I0129 09:22:22.179140 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" podStartSLOduration=3.681735826 podStartE2EDuration="30.179106424s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.846148806 +0000 UTC m=+934.968989033" lastFinishedPulling="2026-01-29 09:22:21.343519404 +0000 UTC m=+961.466359631" observedRunningTime="2026-01-29 09:22:22.178665452 +0000 UTC m=+962.301505679" watchObservedRunningTime="2026-01-29 09:22:22.179106424 +0000 UTC m=+962.301946651" Jan 29 09:22:24 crc 
kubenswrapper[4771]: I0129 09:22:24.042006 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pkgdd" Jan 29 09:22:25 crc kubenswrapper[4771]: I0129 09:22:25.429157 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:22:25 crc kubenswrapper[4771]: I0129 09:22:25.450756 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c86f6b-dae6-4551-9413-df4e429c0ffa-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k\" (UID: \"f5c86f6b-dae6-4551-9413-df4e429c0ffa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:22:25 crc kubenswrapper[4771]: I0129 09:22:25.640287 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vbmkt" Jan 29 09:22:25 crc kubenswrapper[4771]: I0129 09:22:25.648021 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.096082 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k"] Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.229068 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" event={"ID":"f5c86f6b-dae6-4551-9413-df4e429c0ffa","Type":"ContainerStarted","Data":"bc75e400ed385c42fe63341e9869aee67858462a7dc084ea0007773b10356ce1"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.235822 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tglqc" event={"ID":"e7b31489-fbef-4fd7-b789-8b20a2b565f1","Type":"ContainerStarted","Data":"ffc6554286bb8a1e3dce86ff37d49894b626a69167f6d823f07631fac2372cbd"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.238226 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" event={"ID":"29710697-a286-413b-a7ce-01631b4cc6de","Type":"ContainerStarted","Data":"dabbfb3b1a03037f2f937f23f7e4260c678a262832f2af6149a911a9de6a7872"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.238898 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.258679 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" event={"ID":"c594b46c-4d8f-4604-a70d-91544ff13805","Type":"ContainerStarted","Data":"ddfeb2ef9c30f24b4444698575c991f9675989d306afed1b66bc9b9dac1e559f"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.260118 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" Jan 29 09:22:26 
crc kubenswrapper[4771]: I0129 09:22:26.274597 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" event={"ID":"be5b01ce-6d7f-40e9-9e6e-3291fab1d242","Type":"ContainerStarted","Data":"cbbbdb6ece0ceb6b300cb6fc027a277f04194a46621a3dc864773cd1efc9f2b1"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.275931 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.289991 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tglqc" podStartSLOduration=13.220365919 podStartE2EDuration="18.289964494s" podCreationTimestamp="2026-01-29 09:22:08 +0000 UTC" firstStartedPulling="2026-01-29 09:22:20.124155813 +0000 UTC m=+960.246996040" lastFinishedPulling="2026-01-29 09:22:25.193754388 +0000 UTC m=+965.316594615" observedRunningTime="2026-01-29 09:22:26.28060854 +0000 UTC m=+966.403448777" watchObservedRunningTime="2026-01-29 09:22:26.289964494 +0000 UTC m=+966.412804721" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.294103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" event={"ID":"f9b6f2b9-26dd-44f5-859d-f9a1828d726d","Type":"ContainerStarted","Data":"378e6801b9236fb963974f390834ae5c5777c3eba63cd5116db62918f0a4952a"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.296077 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.311306 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" podStartSLOduration=28.533962847 podStartE2EDuration="34.311282581s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:22:19.426014393 +0000 UTC m=+959.548854620" lastFinishedPulling="2026-01-29 09:22:25.203334127 +0000 UTC m=+965.326174354" observedRunningTime="2026-01-29 09:22:26.30865513 +0000 UTC m=+966.431495357" watchObservedRunningTime="2026-01-29 09:22:26.311282581 +0000 UTC m=+966.434122808" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.319223 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" event={"ID":"013a2529-271b-4c1d-8ac4-3b443a9d1069","Type":"ContainerStarted","Data":"2ba029b66560532577a9ada5b83c49d96bb96594119244446a75c302fd184468"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.319879 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.341204 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp6kd" event={"ID":"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee","Type":"ContainerStarted","Data":"551fab5ba086e0561a53c449e0ca5f0fec7ac809197a02924ba3a662663aac9f"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.352865 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" podStartSLOduration=4.143907078 podStartE2EDuration="34.352844366s" 
podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.984917692 +0000 UTC m=+935.107757919" lastFinishedPulling="2026-01-29 09:22:25.19385498 +0000 UTC m=+965.316695207" observedRunningTime="2026-01-29 09:22:26.352216179 +0000 UTC m=+966.475056416" watchObservedRunningTime="2026-01-29 09:22:26.352844366 +0000 UTC m=+966.475684593" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.364100 4771 generic.go:334] "Generic (PLEG): container finished" podID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerID="739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495" exitCode=0 Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.364167 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bpzb" event={"ID":"08b285b9-2a98-4751-a845-78ac87d4c3f1","Type":"ContainerDied","Data":"739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495"} Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.383978 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" podStartSLOduration=3.371485556 podStartE2EDuration="34.383958318s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.209652654 +0000 UTC m=+934.332492881" lastFinishedPulling="2026-01-29 09:22:25.222125416 +0000 UTC m=+965.344965643" observedRunningTime="2026-01-29 09:22:26.382214231 +0000 UTC m=+966.505054458" watchObservedRunningTime="2026-01-29 09:22:26.383958318 +0000 UTC m=+966.506798535" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.439751 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lp6kd" podStartSLOduration=25.182573486 podStartE2EDuration="30.439721498s" podCreationTimestamp="2026-01-29 09:21:56 +0000 UTC" firstStartedPulling="2026-01-29 09:22:19.944628303 +0000 UTC m=+960.067468530" lastFinishedPulling="2026-01-29 09:22:25.201776315 +0000 UTC m=+965.324616542" observedRunningTime="2026-01-29 09:22:26.434069085 +0000 UTC m=+966.556909322" watchObservedRunningTime="2026-01-29 09:22:26.439721498 +0000 UTC m=+966.562561725" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.496949 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" podStartSLOduration=3.719524769 podStartE2EDuration="34.496913396s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.618070191 +0000 UTC m=+934.740910418" lastFinishedPulling="2026-01-29 09:22:25.395458818 +0000 UTC m=+965.518299045" observedRunningTime="2026-01-29 09:22:26.485404685 +0000 UTC m=+966.608244912" watchObservedRunningTime="2026-01-29 09:22:26.496913396 +0000 UTC m=+966.619753623" Jan 29 09:22:26 crc kubenswrapper[4771]: I0129 09:22:26.556818 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" podStartSLOduration=3.552340862 podStartE2EDuration="34.556796217s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.190857345 +0000 UTC m=+934.313697582" lastFinishedPulling="2026-01-29 09:22:25.19531271 +0000 UTC m=+965.318152937" observedRunningTime="2026-01-29 09:22:26.549199402 +0000 UTC m=+966.672039629" watchObservedRunningTime="2026-01-29 09:22:26.556796217 +0000 UTC m=+966.679636444" Jan 
29 09:22:27 crc kubenswrapper[4771]: I0129 09:22:27.212019 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:22:27 crc kubenswrapper[4771]: I0129 09:22:27.212219 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:22:27 crc kubenswrapper[4771]: I0129 09:22:27.388246 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" event={"ID":"5048415a-36d8-47a9-aed1-f7395e309ce3","Type":"ContainerStarted","Data":"1631544468e416e6f33f18d71a424973e85c28b3b246189cfb146f18910a6a26"} Jan 29 09:22:27 crc kubenswrapper[4771]: I0129 09:22:27.388485 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" Jan 29 09:22:27 crc kubenswrapper[4771]: I0129 09:22:27.395642 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" event={"ID":"742db07e-b8fa-472a-824c-ce57c4e3bca5","Type":"ContainerStarted","Data":"2b063c17ee70b502c85486d70cd3b82149ce9e51dd64257ecd1434541bb3daf6"} Jan 29 09:22:27 crc kubenswrapper[4771]: I0129 09:22:27.451205 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" podStartSLOduration=3.203545523 podStartE2EDuration="34.45117878s" podCreationTimestamp="2026-01-29 09:21:53 +0000 UTC" firstStartedPulling="2026-01-29 09:21:55.131745577 +0000 UTC m=+935.254585804" lastFinishedPulling="2026-01-29 09:22:26.379378834 +0000 UTC m=+966.502219061" observedRunningTime="2026-01-29 09:22:27.418224558 +0000 UTC m=+967.541064785" watchObservedRunningTime="2026-01-29 09:22:27.45117878 +0000 UTC m=+967.574019007" Jan 29 09:22:27 crc kubenswrapper[4771]: I0129 09:22:27.858612 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" podStartSLOduration=3.9839053570000003 podStartE2EDuration="35.858579199s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.501009922 +0000 UTC m=+934.623850149" lastFinishedPulling="2026-01-29 09:22:26.375683764 +0000 UTC m=+966.498523991" observedRunningTime="2026-01-29 09:22:27.448638251 +0000 UTC m=+967.571478488" watchObservedRunningTime="2026-01-29 09:22:27.858579199 +0000 UTC m=+967.981419426" Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.274885 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lp6kd" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="registry-server" probeResult="failure" output=< Jan 29 09:22:28 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:22:28 crc kubenswrapper[4771]: > Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.404397 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bpzb" event={"ID":"08b285b9-2a98-4751-a845-78ac87d4c3f1","Type":"ContainerStarted","Data":"e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2"} Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.406563 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" 
event={"ID":"8373cf12-3567-409b-ae85-1f530e91c86a","Type":"ContainerStarted","Data":"12c7a3dd0a0cbbcde25aec4e1cee11dffe13b3d83d68be25760987c224ff53b6"} Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.406896 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.432966 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bpzb" podStartSLOduration=12.705162167 podStartE2EDuration="18.432945099s" podCreationTimestamp="2026-01-29 09:22:10 +0000 UTC" firstStartedPulling="2026-01-29 09:22:21.153479699 +0000 UTC m=+961.276319936" lastFinishedPulling="2026-01-29 09:22:26.881262641 +0000 UTC m=+967.004102868" observedRunningTime="2026-01-29 09:22:28.427550823 +0000 UTC m=+968.550391050" watchObservedRunningTime="2026-01-29 09:22:28.432945099 +0000 UTC m=+968.555785336" Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.451613 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" podStartSLOduration=3.353629894 podStartE2EDuration="36.451575653s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.413976786 +0000 UTC m=+934.536817023" lastFinishedPulling="2026-01-29 09:22:27.511922555 +0000 UTC m=+967.634762782" observedRunningTime="2026-01-29 09:22:28.445982122 +0000 UTC m=+968.568822349" watchObservedRunningTime="2026-01-29 09:22:28.451575653 +0000 UTC m=+968.574415880" Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.961570 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:28 crc kubenswrapper[4771]: I0129 09:22:28.961632 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:29 crc kubenswrapper[4771]: I0129 09:22:29.036454 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:29 crc kubenswrapper[4771]: I0129 09:22:29.846680 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cf4dc6f96-vhpg8" Jan 29 09:22:31 crc kubenswrapper[4771]: I0129 09:22:31.305412 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:31 crc kubenswrapper[4771]: I0129 09:22:31.305858 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:31 crc kubenswrapper[4771]: I0129 09:22:31.360618 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:32 crc kubenswrapper[4771]: I0129 09:22:32.790315 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w5l7q" Jan 29 09:22:32 crc kubenswrapper[4771]: I0129 09:22:32.808461 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kzwjr" Jan 29 09:22:32 crc kubenswrapper[4771]: I0129 09:22:32.816586 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" Jan 29 09:22:32 crc kubenswrapper[4771]: I0129 09:22:32.820004 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-t96kk" Jan 29 09:22:32 crc kubenswrapper[4771]: I0129 09:22:32.929385 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-59xvx" Jan 29 09:22:32 crc kubenswrapper[4771]: I0129 09:22:32.973386 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xczpv" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.098414 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-wfjjb" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.221972 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-rjn7t" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.251535 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2cq8m" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.300375 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c298s" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.549093 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-j2vm7" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.612675 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qpqh8" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.789134 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-q74t2" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.863676 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkl7v" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.915039 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-w9c7v" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.967932 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-p7tj6" Jan 29 09:22:33 crc kubenswrapper[4771]: I0129 09:22:33.997377 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-46m5f" Jan 29 09:22:34 crc kubenswrapper[4771]: I0129 09:22:34.231474 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-66qbk" Jan 29 09:22:37 crc kubenswrapper[4771]: I0129 09:22:37.255064 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:22:37 crc 
kubenswrapper[4771]: I0129 09:22:37.304359 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:22:37 crc kubenswrapper[4771]: I0129 09:22:37.488908 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp6kd"] Jan 29 09:22:38 crc kubenswrapper[4771]: I0129 09:22:38.481318 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lp6kd" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="registry-server" containerID="cri-o://551fab5ba086e0561a53c449e0ca5f0fec7ac809197a02924ba3a662663aac9f" gracePeriod=2 Jan 29 09:22:38 crc kubenswrapper[4771]: I0129 09:22:38.612637 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ltjpd" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.026808 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:39 crc kubenswrapper[4771]: E0129 09:22:39.220218 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:89f6fd332fabefd2fff5619432986b37c1c6d197dd1c510f21dfe4609939b8a6" Jan 29 09:22:39 crc kubenswrapper[4771]: E0129 09:22:39.220806 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:89f6fd332fabefd2fff5619432986b37c1c6d197dd1c510f21dfe4609939b8a6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6x8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k_openstack-operators(f5c86f6b-dae6-4551-9413-df4e429c0ffa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:22:39 crc kubenswrapper[4771]: E0129 09:22:39.222347 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" podUID="f5c86f6b-dae6-4551-9413-df4e429c0ffa" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.490750 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp6kd" event={"ID":"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee","Type":"ContainerDied","Data":"551fab5ba086e0561a53c449e0ca5f0fec7ac809197a02924ba3a662663aac9f"} Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.491791 4771 generic.go:334] "Generic (PLEG): container finished" podID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerID="551fab5ba086e0561a53c449e0ca5f0fec7ac809197a02924ba3a662663aac9f" exitCode=0 Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.491889 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lp6kd" event={"ID":"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee","Type":"ContainerDied","Data":"3c7563403b8efc4e3b49760b2f2a58266223edfbbb9cbbcca28c03beca062a2f"} Jan 29 09:22:39 crc 
kubenswrapper[4771]: I0129 09:22:39.491915 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c7563403b8efc4e3b49760b2f2a58266223edfbbb9cbbcca28c03beca062a2f" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.494237 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" event={"ID":"bbfc6317-5079-4f9f-83e0-9f93970a0710","Type":"ContainerStarted","Data":"2d4af973a136ef189175be84ba84ac110a5e1d37a52dc40cfcfe3f5a0218f7cc"} Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.494429 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" Jan 29 09:22:39 crc kubenswrapper[4771]: E0129 09:22:39.496916 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:89f6fd332fabefd2fff5619432986b37c1c6d197dd1c510f21dfe4609939b8a6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" podUID="f5c86f6b-dae6-4551-9413-df4e429c0ffa" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.540482 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" podStartSLOduration=3.1716965249999998 podStartE2EDuration="47.540452842s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:21:54.850611516 +0000 UTC m=+934.973451744" lastFinishedPulling="2026-01-29 09:22:39.219367834 +0000 UTC m=+979.342208061" observedRunningTime="2026-01-29 09:22:39.53817293 +0000 UTC m=+979.661013157" watchObservedRunningTime="2026-01-29 09:22:39.540452842 +0000 UTC m=+979.663293069" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.552944 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.698645 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-catalog-content\") pod \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.698834 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-utilities\") pod \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.698860 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lgr6\" (UniqueName: \"kubernetes.io/projected/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-kube-api-access-8lgr6\") pod \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\" (UID: \"f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee\") " Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.704647 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-utilities" (OuterVolumeSpecName: "utilities") pod "f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" (UID: "f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.707188 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-kube-api-access-8lgr6" (OuterVolumeSpecName: "kube-api-access-8lgr6") pod "f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" (UID: "f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee"). InnerVolumeSpecName "kube-api-access-8lgr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.730737 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" (UID: "f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.802609 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.802647 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lgr6\" (UniqueName: \"kubernetes.io/projected/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-kube-api-access-8lgr6\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:39 crc kubenswrapper[4771]: I0129 09:22:39.802659 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.091190 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tglqc"] Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.091929 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tglqc" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="registry-server" containerID="cri-o://ffc6554286bb8a1e3dce86ff37d49894b626a69167f6d823f07631fac2372cbd" gracePeriod=2 Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.503125 4771 generic.go:334] "Generic (PLEG): container finished" podID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerID="ffc6554286bb8a1e3dce86ff37d49894b626a69167f6d823f07631fac2372cbd" exitCode=0 Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.503216 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lp6kd" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.503200 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tglqc" event={"ID":"e7b31489-fbef-4fd7-b789-8b20a2b565f1","Type":"ContainerDied","Data":"ffc6554286bb8a1e3dce86ff37d49894b626a69167f6d823f07631fac2372cbd"} Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.503272 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tglqc" event={"ID":"e7b31489-fbef-4fd7-b789-8b20a2b565f1","Type":"ContainerDied","Data":"4dc8de642de1e155361695a0cf3f2c8385c228487fc0a627de28c8d23248b04b"} Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.503286 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc8de642de1e155361695a0cf3f2c8385c228487fc0a627de28c8d23248b04b" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.528775 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.541591 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp6kd"] Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.549025 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lp6kd"] Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.629771 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4krg\" (UniqueName: \"kubernetes.io/projected/e7b31489-fbef-4fd7-b789-8b20a2b565f1-kube-api-access-d4krg\") pod \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.629833 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-utilities\") pod \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.630132 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-catalog-content\") pod \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\" (UID: \"e7b31489-fbef-4fd7-b789-8b20a2b565f1\") " Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.631046 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-utilities" (OuterVolumeSpecName: "utilities") pod "e7b31489-fbef-4fd7-b789-8b20a2b565f1" (UID: "e7b31489-fbef-4fd7-b789-8b20a2b565f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.635499 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b31489-fbef-4fd7-b789-8b20a2b565f1-kube-api-access-d4krg" (OuterVolumeSpecName: "kube-api-access-d4krg") pod "e7b31489-fbef-4fd7-b789-8b20a2b565f1" (UID: "e7b31489-fbef-4fd7-b789-8b20a2b565f1"). InnerVolumeSpecName "kube-api-access-d4krg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.680500 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7b31489-fbef-4fd7-b789-8b20a2b565f1" (UID: "e7b31489-fbef-4fd7-b789-8b20a2b565f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.731542 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.731743 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4krg\" (UniqueName: \"kubernetes.io/projected/e7b31489-fbef-4fd7-b789-8b20a2b565f1-kube-api-access-d4krg\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.731762 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b31489-fbef-4fd7-b789-8b20a2b565f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:40 crc kubenswrapper[4771]: I0129 09:22:40.848012 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" path="/var/lib/kubelet/pods/f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee/volumes" Jan 29 09:22:41 crc kubenswrapper[4771]: I0129 09:22:41.373988 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:41 crc kubenswrapper[4771]: I0129 09:22:41.517888 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tglqc" Jan 29 09:22:41 crc kubenswrapper[4771]: I0129 09:22:41.542547 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tglqc"] Jan 29 09:22:41 crc kubenswrapper[4771]: I0129 09:22:41.552284 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tglqc"] Jan 29 09:22:42 crc kubenswrapper[4771]: I0129 09:22:42.848803 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" path="/var/lib/kubelet/pods/e7b31489-fbef-4fd7-b789-8b20a2b565f1/volumes" Jan 29 09:22:43 crc kubenswrapper[4771]: I0129 09:22:43.888003 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bpzb"] Jan 29 09:22:43 crc kubenswrapper[4771]: I0129 09:22:43.888324 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5bpzb" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="registry-server" containerID="cri-o://e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2" gracePeriod=2 Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.271896 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.272333 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.286192 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.387022 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-utilities\") pod \"08b285b9-2a98-4751-a845-78ac87d4c3f1\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.387103 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-catalog-content\") pod \"08b285b9-2a98-4751-a845-78ac87d4c3f1\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.387138 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42d2n\" (UniqueName: \"kubernetes.io/projected/08b285b9-2a98-4751-a845-78ac87d4c3f1-kube-api-access-42d2n\") pod \"08b285b9-2a98-4751-a845-78ac87d4c3f1\" (UID: \"08b285b9-2a98-4751-a845-78ac87d4c3f1\") " Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.388110 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-utilities" (OuterVolumeSpecName: "utilities") pod "08b285b9-2a98-4751-a845-78ac87d4c3f1" (UID: "08b285b9-2a98-4751-a845-78ac87d4c3f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.397069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b285b9-2a98-4751-a845-78ac87d4c3f1-kube-api-access-42d2n" (OuterVolumeSpecName: "kube-api-access-42d2n") pod "08b285b9-2a98-4751-a845-78ac87d4c3f1" (UID: "08b285b9-2a98-4751-a845-78ac87d4c3f1"). InnerVolumeSpecName "kube-api-access-42d2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.457458 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08b285b9-2a98-4751-a845-78ac87d4c3f1" (UID: "08b285b9-2a98-4751-a845-78ac87d4c3f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.488536 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42d2n\" (UniqueName: \"kubernetes.io/projected/08b285b9-2a98-4751-a845-78ac87d4c3f1-kube-api-access-42d2n\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.488592 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.488604 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b285b9-2a98-4751-a845-78ac87d4c3f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.542778 4771 generic.go:334] "Generic (PLEG): container finished" podID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerID="e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2" exitCode=0 Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.542797 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bpzb" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.542818 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bpzb" event={"ID":"08b285b9-2a98-4751-a845-78ac87d4c3f1","Type":"ContainerDied","Data":"e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2"} Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.543352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bpzb" event={"ID":"08b285b9-2a98-4751-a845-78ac87d4c3f1","Type":"ContainerDied","Data":"8b58748f64024bd837d8594c390fd406a2c927703dc20d4d9c8d62f53f37eaef"} Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.543377 4771 scope.go:117] "RemoveContainer" containerID="e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.577919 4771 scope.go:117] "RemoveContainer" containerID="739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.584912 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bpzb"] Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.591406 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5bpzb"] Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.607452 4771 scope.go:117] "RemoveContainer" containerID="2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.636392 4771 scope.go:117] "RemoveContainer" containerID="e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2" Jan 29 09:22:44 crc kubenswrapper[4771]: E0129 09:22:44.636977 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2\": container with ID starting with e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2 not found: ID does not exist" containerID="e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.637041 
4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2"} err="failed to get container status \"e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2\": rpc error: code = NotFound desc = could not find container \"e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2\": container with ID starting with e23e165f2af062a8116b724189ab73faad6c767031dc25bebd8a09b1a22598b2 not found: ID does not exist" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.637075 4771 scope.go:117] "RemoveContainer" containerID="739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495" Jan 29 09:22:44 crc kubenswrapper[4771]: E0129 09:22:44.637415 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495\": container with ID starting with 739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495 not found: ID does not exist" containerID="739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.637461 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495"} err="failed to get container status \"739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495\": rpc error: code = NotFound desc = could not find container \"739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495\": container with ID starting with 739932bf3fed69ff62b8067df426f5ed1dedfcb944384e451dc1f40d68e12495 not found: ID does not exist" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.637490 4771 scope.go:117] "RemoveContainer" containerID="2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b" Jan 29 09:22:44 crc kubenswrapper[4771]: E0129 09:22:44.637950 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b\": container with ID starting with 2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b not found: ID does not exist" containerID="2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.638001 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b"} err="failed to get container status \"2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b\": rpc error: code = NotFound desc = could not find container \"2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b\": container with ID starting with 2c21401232ae021cd5e01b676834dc43384e5cf33efe8bb719d189c90684d23b not found: ID does not exist" Jan 29 09:22:44 crc kubenswrapper[4771]: I0129 09:22:44.846394 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" path="/var/lib/kubelet/pods/08b285b9-2a98-4751-a845-78ac87d4c3f1/volumes" Jan 29 09:22:51 crc kubenswrapper[4771]: I0129 09:22:51.839776 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:22:52 crc kubenswrapper[4771]: I0129 09:22:52.615403 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" event={"ID":"f5c86f6b-dae6-4551-9413-df4e429c0ffa","Type":"ContainerStarted","Data":"b17df5b16a80e3113956faf1ba749458a40a405c4260312cbc09d67e93618ce8"} Jan 29 09:22:52 crc kubenswrapper[4771]: I0129 09:22:52.616237 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:22:52 crc kubenswrapper[4771]: I0129 09:22:52.647043 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" podStartSLOduration=34.417497077 podStartE2EDuration="1m0.64702131s" podCreationTimestamp="2026-01-29 09:21:52 +0000 UTC" firstStartedPulling="2026-01-29 09:22:26.11880921 +0000 UTC m=+966.241649437" lastFinishedPulling="2026-01-29 09:22:52.348333433 +0000 UTC m=+992.471173670" observedRunningTime="2026-01-29 09:22:52.640993526 +0000 UTC m=+992.763833753" watchObservedRunningTime="2026-01-29 09:22:52.64702131 +0000 UTC m=+992.769861547" Jan 29 09:22:53 crc kubenswrapper[4771]: I0129 09:22:53.705645 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-j8578" Jan 29 09:23:05 crc kubenswrapper[4771]: I0129 09:23:05.655856 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k" Jan 29 09:23:14 crc kubenswrapper[4771]: I0129 09:23:14.271421 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:23:14 crc kubenswrapper[4771]: I0129 09:23:14.271977 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.169249 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zt24c"] Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170413 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="extract-utilities" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170430 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="extract-utilities" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170444 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="extract-utilities" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170451 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="extract-utilities" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170469 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="extract-content" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170476 4771 
state_mem.go:107] "Deleted CPUSet assignment" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="extract-content" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170484 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170490 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170516 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170522 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170533 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170539 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170547 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="extract-content" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170553 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="extract-content" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170563 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="extract-content" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170569 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="extract-content" Jan 29 09:23:19 crc kubenswrapper[4771]: E0129 09:23:19.170582 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="extract-utilities" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170588 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="extract-utilities" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170762 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1211334-cd8b-4f3f-ac3e-0c9ab6e52cee" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170774 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b285b9-2a98-4751-a845-78ac87d4c3f1" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.170783 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b31489-fbef-4fd7-b789-8b20a2b565f1" containerName="registry-server" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.171603 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.179681 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.179964 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.180105 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.187894 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-b86p2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.208563 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zt24c"] Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.281631 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrv6\" (UniqueName: \"kubernetes.io/projected/cb64539d-7bde-40b8-9408-b201efdfe2a0-kube-api-access-lqrv6\") pod \"dnsmasq-dns-675f4bcbfc-zt24c\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.281770 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb64539d-7bde-40b8-9408-b201efdfe2a0-config\") pod \"dnsmasq-dns-675f4bcbfc-zt24c\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.287137 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5j9z2"] Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.289647 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.292575 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.299765 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5j9z2"] Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.383488 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86p7l\" (UniqueName: \"kubernetes.io/projected/234dd8a2-f5a4-4707-8073-7a10fded7e8f-kube-api-access-86p7l\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.383576 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb64539d-7bde-40b8-9408-b201efdfe2a0-config\") pod \"dnsmasq-dns-675f4bcbfc-zt24c\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.383614 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-config\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.383641 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.383712 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrv6\" (UniqueName: \"kubernetes.io/projected/cb64539d-7bde-40b8-9408-b201efdfe2a0-kube-api-access-lqrv6\") pod \"dnsmasq-dns-675f4bcbfc-zt24c\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.384680 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb64539d-7bde-40b8-9408-b201efdfe2a0-config\") pod \"dnsmasq-dns-675f4bcbfc-zt24c\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.424427 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrv6\" (UniqueName: \"kubernetes.io/projected/cb64539d-7bde-40b8-9408-b201efdfe2a0-kube-api-access-lqrv6\") pod \"dnsmasq-dns-675f4bcbfc-zt24c\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.484785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 
09:23:19.485270 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86p7l\" (UniqueName: \"kubernetes.io/projected/234dd8a2-f5a4-4707-8073-7a10fded7e8f-kube-api-access-86p7l\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.485381 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-config\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.485985 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.486464 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-config\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.503896 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.504958 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86p7l\" (UniqueName: \"kubernetes.io/projected/234dd8a2-f5a4-4707-8073-7a10fded7e8f-kube-api-access-86p7l\") pod \"dnsmasq-dns-78dd6ddcc-5j9z2\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.614342 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:19 crc kubenswrapper[4771]: I0129 09:23:19.977731 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zt24c"] Jan 29 09:23:20 crc kubenswrapper[4771]: I0129 09:23:20.119383 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5j9z2"] Jan 29 09:23:20 crc kubenswrapper[4771]: W0129 09:23:20.125174 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod234dd8a2_f5a4_4707_8073_7a10fded7e8f.slice/crio-5e9b2273013432673ae7613d19b11b867cf7ef855fda4557918f402a9da8f359 WatchSource:0}: Error finding container 5e9b2273013432673ae7613d19b11b867cf7ef855fda4557918f402a9da8f359: Status 404 returned error can't find the container with id 5e9b2273013432673ae7613d19b11b867cf7ef855fda4557918f402a9da8f359 Jan 29 09:23:20 crc kubenswrapper[4771]: I0129 09:23:20.836838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" event={"ID":"cb64539d-7bde-40b8-9408-b201efdfe2a0","Type":"ContainerStarted","Data":"20ec22648e1ca420b450dc308f94dde7fabb4ff38dba69d40a547f41b0b9f484"} Jan 29 09:23:20 crc kubenswrapper[4771]: I0129 09:23:20.854208 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" event={"ID":"234dd8a2-f5a4-4707-8073-7a10fded7e8f","Type":"ContainerStarted","Data":"5e9b2273013432673ae7613d19b11b867cf7ef855fda4557918f402a9da8f359"} Jan 29 09:23:21 crc kubenswrapper[4771]: I0129 09:23:21.958803 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zt24c"] Jan 29 09:23:21 crc kubenswrapper[4771]: I0129 09:23:21.985737 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lssgr"] Jan 29 09:23:21 crc kubenswrapper[4771]: I0129 09:23:21.987398 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.011886 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lssgr"] Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.155770 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.156237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-config\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.156293 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nzng\" (UniqueName: \"kubernetes.io/projected/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-kube-api-access-6nzng\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.264498 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.264560 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-config\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.264613 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nzng\" (UniqueName: \"kubernetes.io/projected/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-kube-api-access-6nzng\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.265967 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.269393 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-config\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.299023 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nzng\" (UniqueName: 
\"kubernetes.io/projected/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-kube-api-access-6nzng\") pod \"dnsmasq-dns-5ccc8479f9-lssgr\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.356411 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.355998 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5j9z2"] Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.382711 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rcsq"] Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.384085 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.396969 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rcsq"] Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.572290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvrvf\" (UniqueName: \"kubernetes.io/projected/6ee629bb-c7bd-4915-a173-925d1a8582df-kube-api-access-wvrvf\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.572402 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-config\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.572445 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.674293 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvrvf\" (UniqueName: \"kubernetes.io/projected/6ee629bb-c7bd-4915-a173-925d1a8582df-kube-api-access-wvrvf\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.674374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-config\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.674414 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.675593 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.676773 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-config\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.704538 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvrvf\" (UniqueName: \"kubernetes.io/projected/6ee629bb-c7bd-4915-a173-925d1a8582df-kube-api-access-wvrvf\") pod \"dnsmasq-dns-57d769cc4f-7rcsq\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") " pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.718284 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" Jan 29 09:23:22 crc kubenswrapper[4771]: I0129 09:23:22.940468 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lssgr"] Jan 29 09:23:22 crc kubenswrapper[4771]: W0129 09:23:22.947116 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod581dd645_a6f5_44f2_bdc0_d5ad52b14e78.slice/crio-a863dd8c774e6bfde6e79d3e151b7a62d77c15750f8c5ac98f931f110841e8ed WatchSource:0}: Error finding container a863dd8c774e6bfde6e79d3e151b7a62d77c15750f8c5ac98f931f110841e8ed: Status 404 returned error can't find the container with id a863dd8c774e6bfde6e79d3e151b7a62d77c15750f8c5ac98f931f110841e8ed Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.198314 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.206590 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.215215 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.215333 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.215429 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.215765 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.215911 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.222105 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-92qks" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.222594 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.225852 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.343733 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rcsq"] Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399536 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399629 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399759 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399793 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399828 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399859 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv4gc\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-kube-api-access-wv4gc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399934 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.399974 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.400009 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.400035 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.400077 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503228 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503308 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503343 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503392 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503410 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503435 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv4gc\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-kube-api-access-wv4gc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503490 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503515 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503543 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.503557 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.504308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.504628 4771 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.505272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.505665 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.505891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.506082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.517617 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.517851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.525314 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.535842 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.536163 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv4gc\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-kube-api-access-wv4gc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.548888 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.550442 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.551802 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.555284 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.563404 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.563675 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.563825 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.564061 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.564366 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-px4wr" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.564506 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.606514 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709738 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709818 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9abaa29e-0912-445b-a09f-5ce90865a13b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709845 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709874 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709895 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709915 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709933 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9abaa29e-0912-445b-a09f-5ce90865a13b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709951 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.709984 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.710001 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.710024 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4tp\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-kube-api-access-5l4tp\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811368 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811463 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9abaa29e-0912-445b-a09f-5ce90865a13b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 
09:23:23.811492 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811527 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811558 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811585 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811635 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9abaa29e-0912-445b-a09f-5ce90865a13b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811657 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811718 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811743 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.811769 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l4tp\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-kube-api-access-5l4tp\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.812673 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" 
(UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.814567 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.814683 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.814869 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.815548 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.816136 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.817126 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.825813 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9abaa29e-0912-445b-a09f-5ce90865a13b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.826215 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.834075 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9abaa29e-0912-445b-a09f-5ce90865a13b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.837966 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.840492 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l4tp\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-kube-api-access-5l4tp\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.859938 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") " pod="openstack/rabbitmq-server-0" Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.893418 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" event={"ID":"6ee629bb-c7bd-4915-a173-925d1a8582df","Type":"ContainerStarted","Data":"bd1aec2316ff16179a640f42f7829a6e480eeea1e42a2efd791521ce4e05328c"} Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.895534 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" event={"ID":"581dd645-a6f5-44f2-bdc0-d5ad52b14e78","Type":"ContainerStarted","Data":"a863dd8c774e6bfde6e79d3e151b7a62d77c15750f8c5ac98f931f110841e8ed"} Jan 29 09:23:23 crc kubenswrapper[4771]: I0129 09:23:23.917779 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:23:24 crc kubenswrapper[4771]: W0129 09:23:24.355744 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9abaa29e_0912_445b_a09f_5ce90865a13b.slice/crio-3fc2715288aff49f1a95f87411938857c6424bb586063dd449618347907bb346 WatchSource:0}: Error finding container 3fc2715288aff49f1a95f87411938857c6424bb586063dd449618347907bb346: Status 404 returned error can't find the container with id 3fc2715288aff49f1a95f87411938857c6424bb586063dd449618347907bb346 Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.357233 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.463574 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:23:24 crc kubenswrapper[4771]: W0129 09:23:24.479182 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3061d0c_7a27_4062_b2a7_12f8a1e1fac6.slice/crio-807ce9f538ce48bb57993bff1eab2a7316a1278943884a27e9847a3a91f180f4 WatchSource:0}: Error finding container 807ce9f538ce48bb57993bff1eab2a7316a1278943884a27e9847a3a91f180f4: Status 404 returned error can't find the container with id 807ce9f538ce48bb57993bff1eab2a7316a1278943884a27e9847a3a91f180f4 Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.593347 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.595668 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.599504 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-dx4zd" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.600145 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.600524 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.603454 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.606030 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.611551 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.734612 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.735055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.735152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg87p\" (UniqueName: \"kubernetes.io/projected/bd70aa50-2651-4840-a551-44a608ccb08b-kube-api-access-dg87p\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.735174 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.735235 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd70aa50-2651-4840-a551-44a608ccb08b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.735293 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd70aa50-2651-4840-a551-44a608ccb08b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.735660 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd70aa50-2651-4840-a551-44a608ccb08b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.735809 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.840483 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.840737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.840867 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg87p\" (UniqueName: \"kubernetes.io/projected/bd70aa50-2651-4840-a551-44a608ccb08b-kube-api-access-dg87p\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.840911 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.840937 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.840950 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd70aa50-2651-4840-a551-44a608ccb08b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.841187 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd70aa50-2651-4840-a551-44a608ccb08b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.841249 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd70aa50-2651-4840-a551-44a608ccb08b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.841292 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.841547 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.842395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.842609 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd70aa50-2651-4840-a551-44a608ccb08b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.842722 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd70aa50-2651-4840-a551-44a608ccb08b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.852161 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd70aa50-2651-4840-a551-44a608ccb08b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.877110 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd70aa50-2651-4840-a551-44a608ccb08b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.879429 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.888646 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg87p\" (UniqueName: \"kubernetes.io/projected/bd70aa50-2651-4840-a551-44a608ccb08b-kube-api-access-dg87p\") pod \"openstack-galera-0\" (UID: \"bd70aa50-2651-4840-a551-44a608ccb08b\") " pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.921077 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"9abaa29e-0912-445b-a09f-5ce90865a13b","Type":"ContainerStarted","Data":"3fc2715288aff49f1a95f87411938857c6424bb586063dd449618347907bb346"} Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.938552 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 09:23:24 crc kubenswrapper[4771]: I0129 09:23:24.977636 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6","Type":"ContainerStarted","Data":"807ce9f538ce48bb57993bff1eab2a7316a1278943884a27e9847a3a91f180f4"} Jan 29 09:23:25 crc kubenswrapper[4771]: I0129 09:23:25.663379 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.000631 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd70aa50-2651-4840-a551-44a608ccb08b","Type":"ContainerStarted","Data":"a415c316804d0393827fc73211196a759325fe05a802976817ea189068fbf96d"} Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.095320 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.096660 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.099725 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-whlrn" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.099725 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.099914 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.100175 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.108349 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.181888 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.182036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.182070 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.182106 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ff06b9bb-31fc-437f-96fb-6ab586bb9918-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.182146 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.182175 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff06b9bb-31fc-437f-96fb-6ab586bb9918-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.182214 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tkn\" (UniqueName: \"kubernetes.io/projected/ff06b9bb-31fc-437f-96fb-6ab586bb9918-kube-api-access-f5tkn\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.182289 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff06b9bb-31fc-437f-96fb-6ab586bb9918-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284497 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff06b9bb-31fc-437f-96fb-6ab586bb9918-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284593 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284668 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284724 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284756 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ff06b9bb-31fc-437f-96fb-6ab586bb9918-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284789 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284817 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff06b9bb-31fc-437f-96fb-6ab586bb9918-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.284844 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tkn\" (UniqueName: \"kubernetes.io/projected/ff06b9bb-31fc-437f-96fb-6ab586bb9918-kube-api-access-f5tkn\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.286127 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.288166 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.289487 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ff06b9bb-31fc-437f-96fb-6ab586bb9918-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.290543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.292674 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff06b9bb-31fc-437f-96fb-6ab586bb9918-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.313782 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff06b9bb-31fc-437f-96fb-6ab586bb9918-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.318008 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tkn\" (UniqueName: \"kubernetes.io/projected/ff06b9bb-31fc-437f-96fb-6ab586bb9918-kube-api-access-f5tkn\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.318221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff06b9bb-31fc-437f-96fb-6ab586bb9918-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.319409 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.325173 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.333459 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ff06b9bb-31fc-437f-96fb-6ab586bb9918\") " pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.333889 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.334182 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.334397 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-67lp6" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.356306 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.387419 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71cb4a34-0373-453e-b103-3e6e0a00ff0c-kolla-config\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.387485 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqfql\" (UniqueName: \"kubernetes.io/projected/71cb4a34-0373-453e-b103-3e6e0a00ff0c-kube-api-access-tqfql\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.387551 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cb4a34-0373-453e-b103-3e6e0a00ff0c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.387609 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cb4a34-0373-453e-b103-3e6e0a00ff0c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.387633 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71cb4a34-0373-453e-b103-3e6e0a00ff0c-config-data\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.460274 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.489789 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cb4a34-0373-453e-b103-3e6e0a00ff0c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.489878 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cb4a34-0373-453e-b103-3e6e0a00ff0c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.489914 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71cb4a34-0373-453e-b103-3e6e0a00ff0c-config-data\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.489992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71cb4a34-0373-453e-b103-3e6e0a00ff0c-kolla-config\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.490032 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqfql\" (UniqueName: \"kubernetes.io/projected/71cb4a34-0373-453e-b103-3e6e0a00ff0c-kube-api-access-tqfql\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.491708 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71cb4a34-0373-453e-b103-3e6e0a00ff0c-kolla-config\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.491981 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71cb4a34-0373-453e-b103-3e6e0a00ff0c-config-data\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.494420 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/71cb4a34-0373-453e-b103-3e6e0a00ff0c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.504908 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cb4a34-0373-453e-b103-3e6e0a00ff0c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.508574 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqfql\" (UniqueName: \"kubernetes.io/projected/71cb4a34-0373-453e-b103-3e6e0a00ff0c-kube-api-access-tqfql\") pod \"memcached-0\" (UID: \"71cb4a34-0373-453e-b103-3e6e0a00ff0c\") " pod="openstack/memcached-0" Jan 29 09:23:26 crc kubenswrapper[4771]: I0129 09:23:26.713738 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.189931 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.191729 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.196001 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-fkdfr" Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.207202 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.238888 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9rvh\" (UniqueName: \"kubernetes.io/projected/3004cf2e-c4f0-45ba-a5f5-ade209c47247-kube-api-access-j9rvh\") pod \"kube-state-metrics-0\" (UID: \"3004cf2e-c4f0-45ba-a5f5-ade209c47247\") " pod="openstack/kube-state-metrics-0" Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.341615 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9rvh\" (UniqueName: \"kubernetes.io/projected/3004cf2e-c4f0-45ba-a5f5-ade209c47247-kube-api-access-j9rvh\") pod \"kube-state-metrics-0\" (UID: \"3004cf2e-c4f0-45ba-a5f5-ade209c47247\") " pod="openstack/kube-state-metrics-0" Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.387138 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9rvh\" (UniqueName: \"kubernetes.io/projected/3004cf2e-c4f0-45ba-a5f5-ade209c47247-kube-api-access-j9rvh\") pod \"kube-state-metrics-0\" (UID: \"3004cf2e-c4f0-45ba-a5f5-ade209c47247\") " pod="openstack/kube-state-metrics-0" Jan 29 09:23:28 crc kubenswrapper[4771]: I0129 09:23:28.523344 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.806213 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hqvn7"] Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.807755 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.814946 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.815378 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.815827 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8zzpz" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.827338 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqvn7"] Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.843040 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-sjg8v"] Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.844821 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.883246 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sjg8v"] Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.916976 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-log-ovn\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917032 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-log\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917054 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1390576-f674-420d-93a7-2bee6d52f9f0-scripts\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917080 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2483b251-476f-45b5-a46e-60f4dfe1024f-scripts\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917193 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-etc-ovs\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917235 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7v9d\" (UniqueName: \"kubernetes.io/projected/e1390576-f674-420d-93a7-2bee6d52f9f0-kube-api-access-g7v9d\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " 
pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917313 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1390576-f674-420d-93a7-2bee6d52f9f0-ovn-controller-tls-certs\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917478 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjg9v\" (UniqueName: \"kubernetes.io/projected/2483b251-476f-45b5-a46e-60f4dfe1024f-kube-api-access-bjg9v\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917571 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-run\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917635 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-run\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917665 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1390576-f674-420d-93a7-2bee6d52f9f0-combined-ca-bundle\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917789 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-lib\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:31 crc kubenswrapper[4771]: I0129 09:23:31.917819 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-run-ovn\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020321 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1390576-f674-420d-93a7-2bee6d52f9f0-ovn-controller-tls-certs\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020411 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjg9v\" (UniqueName: \"kubernetes.io/projected/2483b251-476f-45b5-a46e-60f4dfe1024f-kube-api-access-bjg9v\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " 
pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020436 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-run\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020459 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-run\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020479 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1390576-f674-420d-93a7-2bee6d52f9f0-combined-ca-bundle\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020510 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-lib\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020531 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-run-ovn\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020573 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-log-ovn\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020593 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1390576-f674-420d-93a7-2bee6d52f9f0-scripts\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020614 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-log\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020639 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2483b251-476f-45b5-a46e-60f4dfe1024f-scripts\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020668 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-etc-ovs\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.020713 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7v9d\" (UniqueName: \"kubernetes.io/projected/e1390576-f674-420d-93a7-2bee6d52f9f0-kube-api-access-g7v9d\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.021107 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-lib\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.021233 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-run\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.021250 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-run\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.021366 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-run-ovn\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.021499 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1390576-f674-420d-93a7-2bee6d52f9f0-var-log-ovn\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.021594 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-var-log\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.021788 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2483b251-476f-45b5-a46e-60f4dfe1024f-etc-ovs\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.024315 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2483b251-476f-45b5-a46e-60f4dfe1024f-scripts\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.024299 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1390576-f674-420d-93a7-2bee6d52f9f0-scripts\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.034920 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1390576-f674-420d-93a7-2bee6d52f9f0-ovn-controller-tls-certs\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.035045 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1390576-f674-420d-93a7-2bee6d52f9f0-combined-ca-bundle\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.039053 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjg9v\" (UniqueName: \"kubernetes.io/projected/2483b251-476f-45b5-a46e-60f4dfe1024f-kube-api-access-bjg9v\") pod \"ovn-controller-ovs-sjg8v\" (UID: \"2483b251-476f-45b5-a46e-60f4dfe1024f\") " pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.042117 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7v9d\" (UniqueName: \"kubernetes.io/projected/e1390576-f674-420d-93a7-2bee6d52f9f0-kube-api-access-g7v9d\") pod \"ovn-controller-hqvn7\" (UID: \"e1390576-f674-420d-93a7-2bee6d52f9f0\") " pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.137652 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqvn7" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.163161 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.701954 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.703333 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.705442 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zx9zk" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.716641 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.716683 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.717055 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.717368 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.724585 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.735520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.735684 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.735752 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.735828 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.735866 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.735908 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.736043 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hgkwt\" (UniqueName: \"kubernetes.io/projected/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-kube-api-access-hgkwt\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.736135 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-config\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.837330 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkwt\" (UniqueName: \"kubernetes.io/projected/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-kube-api-access-hgkwt\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.837414 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-config\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.837457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.837492 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.838550 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.838595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.838619 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.838642 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc 
kubenswrapper[4771]: I0129 09:23:32.838795 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-config\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.839004 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.839077 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.839869 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.844015 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.844117 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.854923 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkwt\" (UniqueName: \"kubernetes.io/projected/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-kube-api-access-hgkwt\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.855729 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c641f1-e0cc-4892-8e36-9a70ee2bacc9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:32 crc kubenswrapper[4771]: I0129 09:23:32.861561 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:33 crc kubenswrapper[4771]: I0129 09:23:33.052581 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 09:23:34 crc kubenswrapper[4771]: I0129 09:23:34.930580 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 09:23:34 crc kubenswrapper[4771]: I0129 09:23:34.933127 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:34 crc kubenswrapper[4771]: I0129 09:23:34.934433 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 09:23:34 crc kubenswrapper[4771]: I0129 09:23:34.936051 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vbtb6" Jan 29 09:23:34 crc kubenswrapper[4771]: I0129 09:23:34.936336 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 09:23:34 crc kubenswrapper[4771]: I0129 09:23:34.936474 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 09:23:34 crc kubenswrapper[4771]: I0129 09:23:34.937894 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.089581 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.089650 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34887d57-0fb9-4617-b9d0-1338663bd16b-config\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.089745 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/34887d57-0fb9-4617-b9d0-1338663bd16b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.089820 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.089855 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz6df\" (UniqueName: \"kubernetes.io/projected/34887d57-0fb9-4617-b9d0-1338663bd16b-kube-api-access-fz6df\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.089881 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc 
kubenswrapper[4771]: I0129 09:23:35.089972 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.090000 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34887d57-0fb9-4617-b9d0-1338663bd16b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196058 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196177 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34887d57-0fb9-4617-b9d0-1338663bd16b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196381 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34887d57-0fb9-4617-b9d0-1338663bd16b-config\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196495 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/34887d57-0fb9-4617-b9d0-1338663bd16b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196555 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196589 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz6df\" (UniqueName: \"kubernetes.io/projected/34887d57-0fb9-4617-b9d0-1338663bd16b-kube-api-access-fz6df\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.196624 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-combined-ca-bundle\") 
pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.197326 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.197955 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34887d57-0fb9-4617-b9d0-1338663bd16b-config\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.197972 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34887d57-0fb9-4617-b9d0-1338663bd16b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.198263 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/34887d57-0fb9-4617-b9d0-1338663bd16b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.205575 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.205825 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.205983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/34887d57-0fb9-4617-b9d0-1338663bd16b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.216247 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz6df\" (UniqueName: \"kubernetes.io/projected/34887d57-0fb9-4617-b9d0-1338663bd16b-kube-api-access-fz6df\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.227183 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"34887d57-0fb9-4617-b9d0-1338663bd16b\") " pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:35 crc kubenswrapper[4771]: I0129 09:23:35.260311 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 09:23:40 crc kubenswrapper[4771]: E0129 09:23:40.940218 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 29 09:23:40 crc kubenswrapper[4771]: E0129 09:23:40.940968 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l4tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(9abaa29e-0912-445b-a09f-5ce90865a13b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:23:40 crc kubenswrapper[4771]: E0129 09:23:40.942154 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" Jan 29 09:23:40 crc kubenswrapper[4771]: E0129 09:23:40.964999 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 29 09:23:40 crc kubenswrapper[4771]: E0129 09:23:40.965243 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv4gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f3061d0c-7a27-4062-b2a7-12f8a1e1fac6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:23:40 crc kubenswrapper[4771]: E0129 09:23:40.967210 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" Jan 29 09:23:41 crc kubenswrapper[4771]: E0129 09:23:41.178362 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" Jan 29 09:23:41 crc kubenswrapper[4771]: E0129 09:23:41.180519 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" Jan 29 09:23:44 crc kubenswrapper[4771]: I0129 09:23:44.270954 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:23:44 crc kubenswrapper[4771]: I0129 09:23:44.271198 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:23:44 crc kubenswrapper[4771]: I0129 09:23:44.271230 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:23:44 crc kubenswrapper[4771]: I0129 09:23:44.271852 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20dd9c59445370fa21c3eb4e7a8f0add609e4bdb4bf039a1de172564e5cc26d6"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:23:44 crc kubenswrapper[4771]: I0129 09:23:44.271905 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://20dd9c59445370fa21c3eb4e7a8f0add609e4bdb4bf039a1de172564e5cc26d6" gracePeriod=600 Jan 29 09:23:45 crc kubenswrapper[4771]: I0129 09:23:45.205673 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="20dd9c59445370fa21c3eb4e7a8f0add609e4bdb4bf039a1de172564e5cc26d6" exitCode=0 Jan 29 09:23:45 crc kubenswrapper[4771]: I0129 09:23:45.205770 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"20dd9c59445370fa21c3eb4e7a8f0add609e4bdb4bf039a1de172564e5cc26d6"} Jan 29 09:23:45 crc kubenswrapper[4771]: I0129 09:23:45.205903 4771 scope.go:117] "RemoveContainer" containerID="03920492f3ef5aedc2a41c61dc5f9a95c03384d306c3153f2e8b409334342291" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.543197 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.543844 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nzng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lssgr_openstack(581dd645-a6f5-44f2-bdc0-d5ad52b14e78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.545030 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" podUID="581dd645-a6f5-44f2-bdc0-d5ad52b14e78" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.614908 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.615398 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqrv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-zt24c_openstack(cb64539d-7bde-40b8-9408-b201efdfe2a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.616964 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" podUID="cb64539d-7bde-40b8-9408-b201efdfe2a0" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.625597 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.625799 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvrvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-7rcsq_openstack(6ee629bb-c7bd-4915-a173-925d1a8582df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.632830 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" podUID="6ee629bb-c7bd-4915-a173-925d1a8582df" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.651016 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.651185 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86p7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5j9z2_openstack(234dd8a2-f5a4-4707-8073-7a10fded7e8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:23:47 crc kubenswrapper[4771]: E0129 09:23:47.652554 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" podUID="234dd8a2-f5a4-4707-8073-7a10fded7e8f" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.229227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"1b3ab7b9f2df880b7295ed04362ae5024763966067568a703d675444eb8c341b"} Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.234025 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd70aa50-2651-4840-a551-44a608ccb08b","Type":"ContainerStarted","Data":"5a198a04d2bb24c01e936a7b76a7329411d09308ddba32ea9d9b888dd674bc99"} Jan 29 09:23:48 crc kubenswrapper[4771]: E0129 09:23:48.235722 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" podUID="6ee629bb-c7bd-4915-a173-925d1a8582df" Jan 29 09:23:48 crc kubenswrapper[4771]: E0129 09:23:48.235924 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" podUID="581dd645-a6f5-44f2-bdc0-d5ad52b14e78" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.320285 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.343999 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqvn7"] Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.354998 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.365512 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.501979 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 09:23:48 crc kubenswrapper[4771]: W0129 09:23:48.521751 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2c641f1_e0cc_4892_8e36_9a70ee2bacc9.slice/crio-f5c2af31e678d717d1480bb59a4f427ce75c367120239701e109939b406c5c82 WatchSource:0}: Error finding container f5c2af31e678d717d1480bb59a4f427ce75c367120239701e109939b406c5c82: Status 404 returned error can't find the container with id f5c2af31e678d717d1480bb59a4f427ce75c367120239701e109939b406c5c82 Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.630422 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.709496 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.772507 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqrv6\" (UniqueName: \"kubernetes.io/projected/cb64539d-7bde-40b8-9408-b201efdfe2a0-kube-api-access-lqrv6\") pod \"cb64539d-7bde-40b8-9408-b201efdfe2a0\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.772798 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb64539d-7bde-40b8-9408-b201efdfe2a0-config\") pod \"cb64539d-7bde-40b8-9408-b201efdfe2a0\" (UID: \"cb64539d-7bde-40b8-9408-b201efdfe2a0\") " Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.773708 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb64539d-7bde-40b8-9408-b201efdfe2a0-config" (OuterVolumeSpecName: "config") pod "cb64539d-7bde-40b8-9408-b201efdfe2a0" (UID: "cb64539d-7bde-40b8-9408-b201efdfe2a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.778714 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb64539d-7bde-40b8-9408-b201efdfe2a0-kube-api-access-lqrv6" (OuterVolumeSpecName: "kube-api-access-lqrv6") pod "cb64539d-7bde-40b8-9408-b201efdfe2a0" (UID: "cb64539d-7bde-40b8-9408-b201efdfe2a0"). InnerVolumeSpecName "kube-api-access-lqrv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.795007 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.874471 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-config\") pod \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.874605 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86p7l\" (UniqueName: \"kubernetes.io/projected/234dd8a2-f5a4-4707-8073-7a10fded7e8f-kube-api-access-86p7l\") pod \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.874730 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-dns-svc\") pod \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\" (UID: \"234dd8a2-f5a4-4707-8073-7a10fded7e8f\") " Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.875142 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb64539d-7bde-40b8-9408-b201efdfe2a0-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.875161 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqrv6\" (UniqueName: \"kubernetes.io/projected/cb64539d-7bde-40b8-9408-b201efdfe2a0-kube-api-access-lqrv6\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.875538 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-config" (OuterVolumeSpecName: "config") pod "234dd8a2-f5a4-4707-8073-7a10fded7e8f" (UID: "234dd8a2-f5a4-4707-8073-7a10fded7e8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.875968 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "234dd8a2-f5a4-4707-8073-7a10fded7e8f" (UID: "234dd8a2-f5a4-4707-8073-7a10fded7e8f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.881003 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234dd8a2-f5a4-4707-8073-7a10fded7e8f-kube-api-access-86p7l" (OuterVolumeSpecName: "kube-api-access-86p7l") pod "234dd8a2-f5a4-4707-8073-7a10fded7e8f" (UID: "234dd8a2-f5a4-4707-8073-7a10fded7e8f"). InnerVolumeSpecName "kube-api-access-86p7l". 
Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.881003 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234dd8a2-f5a4-4707-8073-7a10fded7e8f-kube-api-access-86p7l" (OuterVolumeSpecName: "kube-api-access-86p7l") pod "234dd8a2-f5a4-4707-8073-7a10fded7e8f" (UID: "234dd8a2-f5a4-4707-8073-7a10fded7e8f"). InnerVolumeSpecName "kube-api-access-86p7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.977355 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.977414 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234dd8a2-f5a4-4707-8073-7a10fded7e8f-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:48 crc kubenswrapper[4771]: I0129 09:23:48.977426 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86p7l\" (UniqueName: \"kubernetes.io/projected/234dd8a2-f5a4-4707-8073-7a10fded7e8f-kube-api-access-86p7l\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.088988 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sjg8v"]
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.254872 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3004cf2e-c4f0-45ba-a5f5-ade209c47247","Type":"ContainerStarted","Data":"a527de3e03505bd01003fe3fd25a7ab029f98c952c20e15900ce335dcdd16eea"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.258782 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"71cb4a34-0373-453e-b103-3e6e0a00ff0c","Type":"ContainerStarted","Data":"8f4d8da0ff118f0de56899b7c85bc3ee35ced3ce820e29f00685ded3f3209e4c"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.262124 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqvn7" event={"ID":"e1390576-f674-420d-93a7-2bee6d52f9f0","Type":"ContainerStarted","Data":"13242fc065d032e1c11477b3cafb310002f9abc3916de005054003b2253d0c0d"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.264478 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"34887d57-0fb9-4617-b9d0-1338663bd16b","Type":"ContainerStarted","Data":"f4ac61b6dfef108183369942ef03eded3b16a5d4f644735f3d08f17cffec4dc2"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.266838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2" event={"ID":"234dd8a2-f5a4-4707-8073-7a10fded7e8f","Type":"ContainerDied","Data":"5e9b2273013432673ae7613d19b11b867cf7ef855fda4557918f402a9da8f359"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.266857 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5j9z2"
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.271277 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9","Type":"ContainerStarted","Data":"f5c2af31e678d717d1480bb59a4f427ce75c367120239701e109939b406c5c82"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.275819 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c"
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.276564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zt24c" event={"ID":"cb64539d-7bde-40b8-9408-b201efdfe2a0","Type":"ContainerDied","Data":"20ec22648e1ca420b450dc308f94dde7fabb4ff38dba69d40a547f41b0b9f484"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.280049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sjg8v" event={"ID":"2483b251-476f-45b5-a46e-60f4dfe1024f","Type":"ContainerStarted","Data":"20d7a55d1222de9b2565abfa54060a8aa2c088ea321677cffeb117d526c17c3a"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.287352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ff06b9bb-31fc-437f-96fb-6ab586bb9918","Type":"ContainerStarted","Data":"f07dbb223d12064eb7d061f7f5bb0a7a93196652800ec1d9a8d56c7b6931de51"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.287406 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ff06b9bb-31fc-437f-96fb-6ab586bb9918","Type":"ContainerStarted","Data":"1a966081bc79a039609726e0d7db1b128323c31d60764d77c8132ae383f81546"}
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.376702 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5j9z2"]
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.397752 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5j9z2"]
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.423590 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zt24c"]
Jan 29 09:23:49 crc kubenswrapper[4771]: I0129 09:23:49.430757 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zt24c"]
Jan 29 09:23:50 crc kubenswrapper[4771]: I0129 09:23:50.851218 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234dd8a2-f5a4-4707-8073-7a10fded7e8f" path="/var/lib/kubelet/pods/234dd8a2-f5a4-4707-8073-7a10fded7e8f/volumes"
Jan 29 09:23:50 crc kubenswrapper[4771]: I0129 09:23:50.852047 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb64539d-7bde-40b8-9408-b201efdfe2a0" path="/var/lib/kubelet/pods/cb64539d-7bde-40b8-9408-b201efdfe2a0/volumes"
Jan 29 09:23:52 crc kubenswrapper[4771]: I0129 09:23:52.322509 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff06b9bb-31fc-437f-96fb-6ab586bb9918" containerID="f07dbb223d12064eb7d061f7f5bb0a7a93196652800ec1d9a8d56c7b6931de51" exitCode=0
Jan 29 09:23:52 crc kubenswrapper[4771]: I0129 09:23:52.322640 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ff06b9bb-31fc-437f-96fb-6ab586bb9918","Type":"ContainerDied","Data":"f07dbb223d12064eb7d061f7f5bb0a7a93196652800ec1d9a8d56c7b6931de51"}
Jan 29 09:23:52 crc kubenswrapper[4771]: I0129 09:23:52.329082 4771 generic.go:334] "Generic (PLEG): container finished" podID="bd70aa50-2651-4840-a551-44a608ccb08b" containerID="5a198a04d2bb24c01e936a7b76a7329411d09308ddba32ea9d9b888dd674bc99" exitCode=0
Jan 29 09:23:52 crc kubenswrapper[4771]: I0129 09:23:52.329145 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd70aa50-2651-4840-a551-44a608ccb08b","Type":"ContainerDied","Data":"5a198a04d2bb24c01e936a7b76a7329411d09308ddba32ea9d9b888dd674bc99"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.371028 4771 generic.go:334] "Generic (PLEG): container finished" podID="2483b251-476f-45b5-a46e-60f4dfe1024f" containerID="0b870056e8519e893408a1bb5bfc3ac6c8f615694fcebde6bf4ccc6b80d42658" exitCode=0
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.371118 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sjg8v" event={"ID":"2483b251-476f-45b5-a46e-60f4dfe1024f","Type":"ContainerDied","Data":"0b870056e8519e893408a1bb5bfc3ac6c8f615694fcebde6bf4ccc6b80d42658"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.378080 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqvn7" event={"ID":"e1390576-f674-420d-93a7-2bee6d52f9f0","Type":"ContainerStarted","Data":"dd9d517b9535b99c833d4b6e72ddb5f1b0a7182195a92996ae7071d8e5f1f760"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.378972 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hqvn7"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.382770 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"34887d57-0fb9-4617-b9d0-1338663bd16b","Type":"ContainerStarted","Data":"3124b4ff9e33fa265304d4846c0de3250d047461ae405773113ff5c533b1a44f"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.386239 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ff06b9bb-31fc-437f-96fb-6ab586bb9918","Type":"ContainerStarted","Data":"cc4513f292bcd10395ade5b34f992fb6d4dd0aa891478ad201eca4dd665cbb21"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.388858 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9","Type":"ContainerStarted","Data":"4a879d95ff531576ac3428da4d99cdc47a70e646f7f25594dbfb0dbabd6f0fae"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.390578 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3004cf2e-c4f0-45ba-a5f5-ade209c47247","Type":"ContainerStarted","Data":"927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.390725 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.393049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd70aa50-2651-4840-a551-44a608ccb08b","Type":"ContainerStarted","Data":"a2ab8bd7795e2de60893bbbc650d95adad0f131b4f97d9a16144a46b298b2637"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.395739 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"71cb4a34-0373-453e-b103-3e6e0a00ff0c","Type":"ContainerStarted","Data":"a5667d5092cb010836ccd4d1f0a04eb24de7807173639cf57ad8a81ddeb116bd"}
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.396321 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.471730 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hqvn7" podStartSLOduration=18.805786179000002 podStartE2EDuration="23.470785879s" podCreationTimestamp="2026-01-29 09:23:31 +0000 UTC" firstStartedPulling="2026-01-29 09:23:48.422407159 +0000 UTC m=+1048.545247386" lastFinishedPulling="2026-01-29 09:23:53.087406859 +0000 UTC m=+1053.210247086" observedRunningTime="2026-01-29 09:23:54.459733778 +0000 UTC m=+1054.582574005" watchObservedRunningTime="2026-01-29 09:23:54.470785879 +0000 UTC m=+1054.593626126"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.490834 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.016869221 podStartE2EDuration="28.490813715s" podCreationTimestamp="2026-01-29 09:23:26 +0000 UTC" firstStartedPulling="2026-01-29 09:23:48.466978234 +0000 UTC m=+1048.589818461" lastFinishedPulling="2026-01-29 09:23:52.940922728 +0000 UTC m=+1053.063762955" observedRunningTime="2026-01-29 09:23:54.48110258 +0000 UTC m=+1054.603942807" watchObservedRunningTime="2026-01-29 09:23:54.490813715 +0000 UTC m=+1054.613653942"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.528093 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.52806237 podStartE2EDuration="29.52806237s" podCreationTimestamp="2026-01-29 09:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:23:54.519043764 +0000 UTC m=+1054.641883991" watchObservedRunningTime="2026-01-29 09:23:54.52806237 +0000 UTC m=+1054.650902597"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.541949 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.309636682 podStartE2EDuration="26.541909907s" podCreationTimestamp="2026-01-29 09:23:28 +0000 UTC" firstStartedPulling="2026-01-29 09:23:48.445156079 +0000 UTC m=+1048.567996306" lastFinishedPulling="2026-01-29 09:23:53.677429304 +0000 UTC m=+1053.800269531" observedRunningTime="2026-01-29 09:23:54.533643702 +0000 UTC m=+1054.656483939" watchObservedRunningTime="2026-01-29 09:23:54.541909907 +0000 UTC m=+1054.664750134"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.569550 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.628480395 podStartE2EDuration="31.569516809s" podCreationTimestamp="2026-01-29 09:23:23 +0000 UTC" firstStartedPulling="2026-01-29 09:23:25.673918027 +0000 UTC m=+1025.796758254" lastFinishedPulling="2026-01-29 09:23:47.614954441 +0000 UTC m=+1047.737794668" observedRunningTime="2026-01-29 09:23:54.560116393 +0000 UTC m=+1054.682956630" watchObservedRunningTime="2026-01-29 09:23:54.569516809 +0000 UTC m=+1054.692357036"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.938909 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 29 09:23:54 crc kubenswrapper[4771]: I0129 09:23:54.939447 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.201292 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q7x7j"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.202666 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.209093 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.210008 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q7x7j"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.316301 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3bde3888-b70c-434c-b553-da79ce5ff68d-ovs-rundir\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.316628 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bde3888-b70c-434c-b553-da79ce5ff68d-config\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.316723 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b74ft\" (UniqueName: \"kubernetes.io/projected/3bde3888-b70c-434c-b553-da79ce5ff68d-kube-api-access-b74ft\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.316746 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3bde3888-b70c-434c-b553-da79ce5ff68d-ovn-rundir\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.316888 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bde3888-b70c-434c-b553-da79ce5ff68d-combined-ca-bundle\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.316939 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bde3888-b70c-434c-b553-da79ce5ff68d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.359091 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lssgr"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.411774 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-hbrs9"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.413329 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.419573 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3bde3888-b70c-434c-b553-da79ce5ff68d-ovs-rundir\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.419637 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bde3888-b70c-434c-b553-da79ce5ff68d-config\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.419673 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b74ft\" (UniqueName: \"kubernetes.io/projected/3bde3888-b70c-434c-b553-da79ce5ff68d-kube-api-access-b74ft\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.419706 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3bde3888-b70c-434c-b553-da79ce5ff68d-ovn-rundir\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.419746 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bde3888-b70c-434c-b553-da79ce5ff68d-combined-ca-bundle\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.419774 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bde3888-b70c-434c-b553-da79ce5ff68d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.420651 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3bde3888-b70c-434c-b553-da79ce5ff68d-ovs-rundir\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.420961 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3bde3888-b70c-434c-b553-da79ce5ff68d-ovn-rundir\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.421490 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bde3888-b70c-434c-b553-da79ce5ff68d-config\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.425200 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.427666 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-hbrs9"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.451592 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sjg8v" event={"ID":"2483b251-476f-45b5-a46e-60f4dfe1024f","Type":"ContainerStarted","Data":"a9d360c239e5a7895f927f14c003e3c3cb448eba9c8365afb3c54bf633219d95"}
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.451663 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sjg8v" event={"ID":"2483b251-476f-45b5-a46e-60f4dfe1024f","Type":"ContainerStarted","Data":"01fb1bfdb7300b34b10e81df766a72dfd14ebd4d3509222009f4c92e8f3ea39c"}
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.452904 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sjg8v"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.452999 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sjg8v"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.488352 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-sjg8v" podStartSLOduration=20.750433311 podStartE2EDuration="24.488326231s" podCreationTimestamp="2026-01-29 09:23:31 +0000 UTC" firstStartedPulling="2026-01-29 09:23:49.203298075 +0000 UTC m=+1049.326138302" lastFinishedPulling="2026-01-29 09:23:52.941190995 +0000 UTC m=+1053.064031222" observedRunningTime="2026-01-29 09:23:55.477236859 +0000 UTC m=+1055.600077106" watchObservedRunningTime="2026-01-29 09:23:55.488326231 +0000 UTC m=+1055.611166468"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.521321 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-config\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.521522 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjtp\" (UniqueName: \"kubernetes.io/projected/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-kube-api-access-rpjtp\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.521563 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.521594 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.576433 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rcsq"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.599077 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bde3888-b70c-434c-b553-da79ce5ff68d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.599123 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b74ft\" (UniqueName: \"kubernetes.io/projected/3bde3888-b70c-434c-b553-da79ce5ff68d-kube-api-access-b74ft\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.599122 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bde3888-b70c-434c-b553-da79ce5ff68d-combined-ca-bundle\") pod \"ovn-controller-metrics-q7x7j\" (UID: \"3bde3888-b70c-434c-b553-da79ce5ff68d\") " pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.608744 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-gj54q"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.610205 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.616831 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.629308 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.629364 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.629443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-config\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.629522 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpjtp\" (UniqueName: \"kubernetes.io/projected/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-kube-api-access-rpjtp\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.630674 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.631203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.631881 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-config\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.660216 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gj54q"]
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.673795 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpjtp\" (UniqueName: \"kubernetes.io/projected/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-kube-api-access-rpjtp\") pod \"dnsmasq-dns-6bc7876d45-hbrs9\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") " pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.732179 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.732398 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtwg\" (UniqueName: \"kubernetes.io/projected/63eb7921-71e0-4fa1-ba56-085f331e8a41-kube-api-access-kxtwg\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.732505 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-dns-svc\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.732586 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.732712 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-config\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.761218 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.836957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-config\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.837061 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.837102 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxtwg\" (UniqueName: \"kubernetes.io/projected/63eb7921-71e0-4fa1-ba56-085f331e8a41-kube-api-access-kxtwg\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.837148 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-dns-svc\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.837192 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.838284 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.839531 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.840161 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-dns-svc\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.840399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-config\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.849648 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-q7x7j"
Jan 29 09:23:55 crc kubenswrapper[4771]: I0129 09:23:55.860292 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxtwg\" (UniqueName: \"kubernetes.io/projected/63eb7921-71e0-4fa1-ba56-085f331e8a41-kube-api-access-kxtwg\") pod \"dnsmasq-dns-8554648995-gj54q\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.023484 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.279911 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.286764 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.348335 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nzng\" (UniqueName: \"kubernetes.io/projected/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-kube-api-access-6nzng\") pod \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") "
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.348501 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-config\") pod \"6ee629bb-c7bd-4915-a173-925d1a8582df\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") "
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.348568 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-dns-svc\") pod \"6ee629bb-c7bd-4915-a173-925d1a8582df\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") "
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.348617 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-dns-svc\") pod \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") "
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.348649 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvrvf\" (UniqueName: \"kubernetes.io/projected/6ee629bb-c7bd-4915-a173-925d1a8582df-kube-api-access-wvrvf\") pod \"6ee629bb-c7bd-4915-a173-925d1a8582df\" (UID: \"6ee629bb-c7bd-4915-a173-925d1a8582df\") "
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.348671 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-config\") pod \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\" (UID: \"581dd645-a6f5-44f2-bdc0-d5ad52b14e78\") "
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.349582 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ee629bb-c7bd-4915-a173-925d1a8582df" (UID: "6ee629bb-c7bd-4915-a173-925d1a8582df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.350091 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-config" (OuterVolumeSpecName: "config") pod "581dd645-a6f5-44f2-bdc0-d5ad52b14e78" (UID: "581dd645-a6f5-44f2-bdc0-d5ad52b14e78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.350280 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.350798 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "581dd645-a6f5-44f2-bdc0-d5ad52b14e78" (UID: "581dd645-a6f5-44f2-bdc0-d5ad52b14e78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.350824 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-config" (OuterVolumeSpecName: "config") pod "6ee629bb-c7bd-4915-a173-925d1a8582df" (UID: "6ee629bb-c7bd-4915-a173-925d1a8582df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.359771 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-kube-api-access-6nzng" (OuterVolumeSpecName: "kube-api-access-6nzng") pod "581dd645-a6f5-44f2-bdc0-d5ad52b14e78" (UID: "581dd645-a6f5-44f2-bdc0-d5ad52b14e78"). InnerVolumeSpecName "kube-api-access-6nzng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.361313 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee629bb-c7bd-4915-a173-925d1a8582df-kube-api-access-wvrvf" (OuterVolumeSpecName: "kube-api-access-wvrvf") pod "6ee629bb-c7bd-4915-a173-925d1a8582df" (UID: "6ee629bb-c7bd-4915-a173-925d1a8582df"). InnerVolumeSpecName "kube-api-access-wvrvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.452547 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nzng\" (UniqueName: \"kubernetes.io/projected/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-kube-api-access-6nzng\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.453117 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee629bb-c7bd-4915-a173-925d1a8582df-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.453129 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.453139 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvrvf\" (UniqueName: \"kubernetes.io/projected/6ee629bb-c7bd-4915-a173-925d1a8582df-kube-api-access-wvrvf\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.453149 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581dd645-a6f5-44f2-bdc0-d5ad52b14e78-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.466552 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.466776 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.473925 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr" event={"ID":"581dd645-a6f5-44f2-bdc0-d5ad52b14e78","Type":"ContainerDied","Data":"a863dd8c774e6bfde6e79d3e151b7a62d77c15750f8c5ac98f931f110841e8ed"}
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.474049 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lssgr"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.486096 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9abaa29e-0912-445b-a09f-5ce90865a13b","Type":"ContainerStarted","Data":"2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4"}
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.490409 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6","Type":"ContainerStarted","Data":"a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02"}
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.501181 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.502250 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7rcsq" event={"ID":"6ee629bb-c7bd-4915-a173-925d1a8582df","Type":"ContainerDied","Data":"bd1aec2316ff16179a640f42f7829a6e480eeea1e42a2efd791521ce4e05328c"}
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.624056 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lssgr"]
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.645757 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lssgr"]
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.664775 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rcsq"]
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.672213 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7rcsq"]
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.787432 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gj54q"]
Jan 29 09:23:56 crc kubenswrapper[4771]: W0129 09:23:56.794188 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63eb7921_71e0_4fa1_ba56_085f331e8a41.slice/crio-f85e1c3373b4102e078cb4104c12d7d1a4e42c6d14af3d28db6854520f9de37a WatchSource:0}: Error finding container f85e1c3373b4102e078cb4104c12d7d1a4e42c6d14af3d28db6854520f9de37a: Status 404 returned error can't find the container with id f85e1c3373b4102e078cb4104c12d7d1a4e42c6d14af3d28db6854520f9de37a
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.857124 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581dd645-a6f5-44f2-bdc0-d5ad52b14e78" path="/var/lib/kubelet/pods/581dd645-a6f5-44f2-bdc0-d5ad52b14e78/volumes"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.857566 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee629bb-c7bd-4915-a173-925d1a8582df" path="/var/lib/kubelet/pods/6ee629bb-c7bd-4915-a173-925d1a8582df/volumes"
Jan 29 09:23:56 crc kubenswrapper[4771]: I0129 09:23:56.883605 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-hbrs9"]
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.040833 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q7x7j"]
Jan 29 09:23:57 crc kubenswrapper[4771]: W0129 09:23:57.046176 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bde3888_b70c_434c_b553_da79ce5ff68d.slice/crio-e162fa7451e878069a5569e2547ecd6df82678b8204ccdacb22e99aab1426c9e WatchSource:0}: Error finding container e162fa7451e878069a5569e2547ecd6df82678b8204ccdacb22e99aab1426c9e: Status 404 returned error can't find the container with id e162fa7451e878069a5569e2547ecd6df82678b8204ccdacb22e99aab1426c9e
Jan 29 09:23:57 crc kubenswrapper[4771]: E0129 09:23:57.486936 4771 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.98:38592->38.129.56.98:40437: read tcp 38.129.56.98:38592->38.129.56.98:40437: read: connection reset by peer
Jan 29 09:23:57 crc kubenswrapper[4771]: E0129 09:23:57.486959 4771 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.98:38592->38.129.56.98:40437: write tcp 38.129.56.98:38592->38.129.56.98:40437: write: broken pipe
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.508983 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" event={"ID":"6c38cd72-7f98-42a2-b5f4-abf5f9125c90","Type":"ContainerStarted","Data":"020c708cda82d821839514b47c456b9aadf2700c6293c056649f6fb9c594b21d"}
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.510890 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"34887d57-0fb9-4617-b9d0-1338663bd16b","Type":"ContainerStarted","Data":"cb37cb67b73e8142a5d6b989e1243bde1834be0e062f7ecf018fcafa248b7323"}
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.513430 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q7x7j" event={"ID":"3bde3888-b70c-434c-b553-da79ce5ff68d","Type":"ContainerStarted","Data":"8f2c4c6cb8c365e500d28bdc154f2528950ef7718907308c5fba352cd693f543"}
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.513479 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q7x7j" event={"ID":"3bde3888-b70c-434c-b553-da79ce5ff68d","Type":"ContainerStarted","Data":"e162fa7451e878069a5569e2547ecd6df82678b8204ccdacb22e99aab1426c9e"}
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.516345 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d2c641f1-e0cc-4892-8e36-9a70ee2bacc9","Type":"ContainerStarted","Data":"44b72e0b07c0272173339c218487caf482cdffa8a92c63311350d43ec727c73a"}
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.518214 4771 generic.go:334] "Generic (PLEG): container finished" podID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerID="96049b868258091d10ae1183a42d93e1cf261ac012fa2fbb2b7576c9f8744648" exitCode=0
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.518278 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gj54q" event={"ID":"63eb7921-71e0-4fa1-ba56-085f331e8a41","Type":"ContainerDied","Data":"96049b868258091d10ae1183a42d93e1cf261ac012fa2fbb2b7576c9f8744648"}
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.518315 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gj54q" event={"ID":"63eb7921-71e0-4fa1-ba56-085f331e8a41","Type":"ContainerStarted","Data":"f85e1c3373b4102e078cb4104c12d7d1a4e42c6d14af3d28db6854520f9de37a"}
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.542637 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.845116042 podStartE2EDuration="24.542612691s" podCreationTimestamp="2026-01-29 09:23:33 +0000 UTC" firstStartedPulling="2026-01-29 09:23:48.65145092 +0000 UTC m=+1048.774291147" lastFinishedPulling="2026-01-29 09:23:56.348947549 +0000 UTC m=+1056.471787796" observedRunningTime="2026-01-29 09:23:57.53082714 +0000 UTC m=+1057.653667387" watchObservedRunningTime="2026-01-29 09:23:57.542612691 +0000 UTC m=+1057.665452918"
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.575958 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q7x7j" podStartSLOduration=2.575938189 podStartE2EDuration="2.575938189s" podCreationTimestamp="2026-01-29 09:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:23:57.57302512 +0000 UTC m=+1057.695865367" watchObservedRunningTime="2026-01-29 09:23:57.575938189 +0000 UTC m=+1057.698778426"
Jan 29 09:23:57 crc kubenswrapper[4771]: I0129 09:23:57.620982 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.8344356 podStartE2EDuration="26.620948795s" podCreationTimestamp="2026-01-29 09:23:31 +0000 UTC" firstStartedPulling="2026-01-29 09:23:48.523486983 +0000 UTC m=+1048.646327210" lastFinishedPulling="2026-01-29 09:23:56.310000178 +0000 UTC m=+1056.432840405" observedRunningTime="2026-01-29 09:23:57.613126232 +0000 UTC m=+1057.735966459" watchObservedRunningTime="2026-01-29 09:23:57.620948795 +0000 UTC m=+1057.743789022"
Jan 29 09:23:58 crc kubenswrapper[4771]: I0129 09:23:58.053811 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 29 09:23:58 crc kubenswrapper[4771]: I0129 09:23:58.528625 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerID="634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2" exitCode=0
Jan 29 09:23:58 crc kubenswrapper[4771]: I0129 09:23:58.528744 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" event={"ID":"6c38cd72-7f98-42a2-b5f4-abf5f9125c90","Type":"ContainerDied","Data":"634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2"}
Jan 29 09:23:58 crc kubenswrapper[4771]: I0129 09:23:58.531912 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gj54q" event={"ID":"63eb7921-71e0-4fa1-ba56-085f331e8a41","Type":"ContainerStarted","Data":"dbdfeb7bb5216528b89fbd99bd7109b88474c7686c70a92fc061c5e68f142061"}
Jan 29 09:23:58 crc kubenswrapper[4771]: I0129 09:23:58.584028 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-gj54q" podStartSLOduration=3.182821524 podStartE2EDuration="3.583991374s" podCreationTimestamp="2026-01-29 09:23:55 +0000 UTC" firstStartedPulling="2026-01-29 09:23:56.798001294 +0000 UTC m=+1056.920841521" lastFinishedPulling="2026-01-29 09:23:57.199171144 +0000 UTC m=+1057.322011371" observedRunningTime="2026-01-29 09:23:58.578222217 +0000 UTC m=+1058.701062454" watchObservedRunningTime="2026-01-29 09:23:58.583991374 +0000 UTC m=+1058.706831601"
Jan 29 09:23:58 crc kubenswrapper[4771]: I0129 09:23:58.599238 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 29 09:23:58 crc kubenswrapper[4771]: I0129 09:23:58.693867 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.260923 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.309098 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.558327 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" event={"ID":"6c38cd72-7f98-42a2-b5f4-abf5f9125c90","Type":"ContainerStarted","Data":"71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04"}
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.559114 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.559141 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-gj54q"
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.559155 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.580890 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" podStartSLOduration=4.171559441 podStartE2EDuration="4.580871693s" podCreationTimestamp="2026-01-29 09:23:55 +0000 UTC" firstStartedPulling="2026-01-29 09:23:56.899002476 +0000 UTC m=+1057.021842703" lastFinishedPulling="2026-01-29 09:23:57.308314688 +0000 UTC m=+1057.431154955" observedRunningTime="2026-01-29 09:23:59.576242727 +0000 UTC m=+1059.699082954" watchObservedRunningTime="2026-01-29 09:23:59.580871693 +0000 UTC m=+1059.703711920"
Jan 29 09:23:59 crc kubenswrapper[4771]: I0129 09:23:59.612989 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.053071 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.104777 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.616582 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.800844 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.802196 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.808208 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.808529 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.808729 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.808893 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6wqd4"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.822566 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.945869 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3656052-f3d0-4665-9fc7-8236cede743b-config\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.946269 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3656052-f3d0-4665-9fc7-8236cede743b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.946397 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.946428 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.946578 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.946755 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3656052-f3d0-4665-9fc7-8236cede743b-scripts\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0"
Jan 29 09:24:00 crc kubenswrapper[4771]: I0129 09:24:00.946948 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffw9\" (UniqueName: \"kubernetes.io/projected/b3656052-f3d0-4665-9fc7-8236cede743b-kube-api-access-wffw9\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0"
I0129 09:24:01.048718 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3656052-f3d0-4665-9fc7-8236cede743b-config\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.048829 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3656052-f3d0-4665-9fc7-8236cede743b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.048863 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.048881 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.048922 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.048967 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3656052-f3d0-4665-9fc7-8236cede743b-scripts\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.049001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wffw9\" (UniqueName: \"kubernetes.io/projected/b3656052-f3d0-4665-9fc7-8236cede743b-kube-api-access-wffw9\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.049543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3656052-f3d0-4665-9fc7-8236cede743b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.050101 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3656052-f3d0-4665-9fc7-8236cede743b-config\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.050111 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3656052-f3d0-4665-9fc7-8236cede743b-scripts\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.053677 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.055688 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.056438 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.056608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3656052-f3d0-4665-9fc7-8236cede743b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.072716 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wffw9\" (UniqueName: \"kubernetes.io/projected/b3656052-f3d0-4665-9fc7-8236cede743b-kube-api-access-wffw9\") pod \"ovn-northd-0\" (UID: \"b3656052-f3d0-4665-9fc7-8236cede743b\") " pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.127105 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.151015 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.655171 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.716983 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.991416 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-j5t4z"] Jan 29 09:24:01 crc kubenswrapper[4771]: I0129 09:24:01.993059 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.006722 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5efa-account-create-update-6vnhr"]
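
The util.go:30 message fires when a pod has no sandbox at all, which is the case for every freshly added pod here (ovn-northd-0, then the glance jobs). Later entries show the util.go:48 variant, "No ready sandbox for pod can be found.", for sandboxes that exist but are no longer ready, such as a completed job's sandbox. A sketch of that decision, with an assumed sandbox type standing in for the runtime's pod sandbox status:

    package main

    import "fmt"

    type sandbox struct {
        id    string
        ready bool
    }

    // decide mirrors the two log messages: a missing sandbox and a not-ready
    // sandbox both force the kubelet to create a fresh one. Illustrative only;
    // the real logic lives in the kubelet's container-runtime pod sync.
    func decide(sb *sandbox) string {
        switch {
        case sb == nil:
            return "No sandbox for pod can be found. Need to start a new one"
        case !sb.ready:
            return "No ready sandbox for pod can be found. Need to start a new one"
        default:
            return "reuse sandbox " + sb.id
        }
    }

    func main() {
        fmt.Println(decide(nil))
        fmt.Println(decide(&sandbox{id: "334fb13aea2e", ready: false}))
        fmt.Println(decide(&sandbox{id: "334fb13aea2e", ready: true}))
    }

Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.008021 4771 util.go:30] "No sandbox for pod can be found.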
Need to start a new one" pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.010953 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.019047 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j5t4z"] Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.027641 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5efa-account-create-update-6vnhr"] Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.071193 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1e97643-6806-457b-998d-82ddd64ccd99-operator-scripts\") pod \"glance-5efa-account-create-update-6vnhr\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.071288 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjnp\" (UniqueName: \"kubernetes.io/projected/152cca8c-e161-488f-b400-ec92a43fd836-kube-api-access-qjjnp\") pod \"glance-db-create-j5t4z\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.071343 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152cca8c-e161-488f-b400-ec92a43fd836-operator-scripts\") pod \"glance-db-create-j5t4z\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.071401 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmdc\" (UniqueName: \"kubernetes.io/projected/f1e97643-6806-457b-998d-82ddd64ccd99-kube-api-access-xgmdc\") pod \"glance-5efa-account-create-update-6vnhr\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.172676 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjnp\" (UniqueName: \"kubernetes.io/projected/152cca8c-e161-488f-b400-ec92a43fd836-kube-api-access-qjjnp\") pod \"glance-db-create-j5t4z\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.173251 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152cca8c-e161-488f-b400-ec92a43fd836-operator-scripts\") pod \"glance-db-create-j5t4z\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.173341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgmdc\" (UniqueName: \"kubernetes.io/projected/f1e97643-6806-457b-998d-82ddd64ccd99-kube-api-access-xgmdc\") pod \"glance-5efa-account-create-update-6vnhr\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.173453 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1e97643-6806-457b-998d-82ddd64ccd99-operator-scripts\") pod \"glance-5efa-account-create-update-6vnhr\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.174718 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1e97643-6806-457b-998d-82ddd64ccd99-operator-scripts\") pod \"glance-5efa-account-create-update-6vnhr\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.175150 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152cca8c-e161-488f-b400-ec92a43fd836-operator-scripts\") pod \"glance-db-create-j5t4z\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.198577 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjnp\" (UniqueName: \"kubernetes.io/projected/152cca8c-e161-488f-b400-ec92a43fd836-kube-api-access-qjjnp\") pod \"glance-db-create-j5t4z\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.201522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgmdc\" (UniqueName: \"kubernetes.io/projected/f1e97643-6806-457b-998d-82ddd64ccd99-kube-api-access-xgmdc\") pod \"glance-5efa-account-create-update-6vnhr\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.346895 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.347366 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.588314 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3656052-f3d0-4665-9fc7-8236cede743b","Type":"ContainerStarted","Data":"334fb13aea2e8352b44f268dd333d2168d48778f300d4c9f6ed2c407b5fcae2e"} Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.834668 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j5t4z"] Jan 29 09:24:02 crc kubenswrapper[4771]: I0129 09:24:02.898270 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5efa-account-create-update-6vnhr"] Jan 29 09:24:03 crc kubenswrapper[4771]: W0129 09:24:03.103341 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1e97643_6806_457b_998d_82ddd64ccd99.slice/crio-b5a29beedb2ac7834ae88d9ed22dc0adef33718f47d24305d9e542fffb851fc8 WatchSource:0}: Error finding container b5a29beedb2ac7834ae88d9ed22dc0adef33718f47d24305d9e542fffb851fc8: Status 404 returned error can't find the container with id b5a29beedb2ac7834ae88d9ed22dc0adef33718f47d24305d9e542fffb851fc8 Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.553890 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ctjw4"] Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.555935 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.573690 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.580742 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ctjw4"] Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.597072 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-operator-scripts\") pod \"root-account-create-update-ctjw4\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.597145 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdslz\" (UniqueName: \"kubernetes.io/projected/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-kube-api-access-bdslz\") pod \"root-account-create-update-ctjw4\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.607555 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j5t4z" event={"ID":"152cca8c-e161-488f-b400-ec92a43fd836","Type":"ContainerStarted","Data":"1bf340c10d80c710bc3952467f5556614618fd58b433b1d283cbcddc16420d1b"} Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.607616 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j5t4z" event={"ID":"152cca8c-e161-488f-b400-ec92a43fd836","Type":"ContainerStarted","Data":"c14a5822fd7f97614b33c7599b9f2f5f8c9b43042d41379f6992532c079aa256"}
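
The W-level manager.go:1169 entry above is cadvisor racing CRI-O: a cgroup watch event arrives for a freshly created crio-... scope, but the runtime cannot yet (or can no longer) resolve that container ID, so the lookup returns 404. The kubelet tolerates this and relies on the PLEG relist, which duly reports the same container (b5a29bee...) as started moments later. A sketch of that log-and-drop handling, with errNotFound and handleWatchEvent as assumed stand-ins rather than cadvisor's actual API:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("status 404: can't find the container")

    // handleWatchEvent shows the tolerant pattern: a 404 while processing a
    // cgroup watch event is logged and dropped, because a later relist will
    // observe the container once the runtime has registered it.
    func handleWatchEvent(lookup func(id string) error, id string) {
        err := lookup(id)
        if errors.Is(err, errNotFound) {
            fmt.Printf("W failed to process watch event for %s: %v\n", id, err)
            return // not fatal; no retry here
        }
        if err != nil {
            fmt.Println("unexpected error:", err)
            return
        }
        fmt.Println("container found:", id)
    }

    func main() {
        handleWatchEvent(func(string) error { return errNotFound }, "b5a29beedb2a")
    }

Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.615332 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod"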
pod="openstack/glance-5efa-account-create-update-6vnhr" event={"ID":"f1e97643-6806-457b-998d-82ddd64ccd99","Type":"ContainerStarted","Data":"5f604bace21ae6898615fab85ba57ec70171c13bb83ecff5ab6a670d8cd6e087"} Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.615399 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5efa-account-create-update-6vnhr" event={"ID":"f1e97643-6806-457b-998d-82ddd64ccd99","Type":"ContainerStarted","Data":"b5a29beedb2ac7834ae88d9ed22dc0adef33718f47d24305d9e542fffb851fc8"} Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.646186 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-j5t4z" podStartSLOduration=2.646154812 podStartE2EDuration="2.646154812s" podCreationTimestamp="2026-01-29 09:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:24:03.645396801 +0000 UTC m=+1063.768237028" watchObservedRunningTime="2026-01-29 09:24:03.646154812 +0000 UTC m=+1063.768995039" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.678247 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5efa-account-create-update-6vnhr" podStartSLOduration=2.678219055 podStartE2EDuration="2.678219055s" podCreationTimestamp="2026-01-29 09:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:24:03.677768113 +0000 UTC m=+1063.800608350" watchObservedRunningTime="2026-01-29 09:24:03.678219055 +0000 UTC m=+1063.801059282" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.700706 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdslz\" (UniqueName: \"kubernetes.io/projected/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-kube-api-access-bdslz\") pod \"root-account-create-update-ctjw4\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.700891 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-operator-scripts\") pod \"root-account-create-update-ctjw4\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.701872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-operator-scripts\") pod \"root-account-create-update-ctjw4\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.727951 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdslz\" (UniqueName: \"kubernetes.io/projected/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-kube-api-access-bdslz\") pod \"root-account-create-update-ctjw4\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:03 crc kubenswrapper[4771]: I0129 09:24:03.901135 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.257482 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ctjw4"] Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.673595 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctjw4" event={"ID":"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8","Type":"ContainerStarted","Data":"47a503c09f3bdbfcf388b163122cdd820a75aea01f3465ece74aceb555e11f97"} Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.674009 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctjw4" event={"ID":"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8","Type":"ContainerStarted","Data":"7843ae443d1a0e447e2bbb234df38da7afdcbbe1c00dcc7dc556a65fea44d84e"} Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.677671 4771 generic.go:334] "Generic (PLEG): container finished" podID="152cca8c-e161-488f-b400-ec92a43fd836" containerID="1bf340c10d80c710bc3952467f5556614618fd58b433b1d283cbcddc16420d1b" exitCode=0 Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.677780 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j5t4z" event={"ID":"152cca8c-e161-488f-b400-ec92a43fd836","Type":"ContainerDied","Data":"1bf340c10d80c710bc3952467f5556614618fd58b433b1d283cbcddc16420d1b"} Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.682539 4771 generic.go:334] "Generic (PLEG): container finished" podID="f1e97643-6806-457b-998d-82ddd64ccd99" containerID="5f604bace21ae6898615fab85ba57ec70171c13bb83ecff5ab6a670d8cd6e087" exitCode=0 Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.682727 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5efa-account-create-update-6vnhr" event={"ID":"f1e97643-6806-457b-998d-82ddd64ccd99","Type":"ContainerDied","Data":"5f604bace21ae6898615fab85ba57ec70171c13bb83ecff5ab6a670d8cd6e087"} Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.691388 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3656052-f3d0-4665-9fc7-8236cede743b","Type":"ContainerStarted","Data":"42ed6bc39d8d4632c318bcb4be127d1a92dc0123e1ea8da02bf4a0e93c1a56c9"} Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.691449 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3656052-f3d0-4665-9fc7-8236cede743b","Type":"ContainerStarted","Data":"7a225a94bf97370598bfbea8f543b3676a3cfa35396733c45fcf466466039f49"} Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.695024 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.703563 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ctjw4" podStartSLOduration=1.70353467 podStartE2EDuration="1.70353467s" podCreationTimestamp="2026-01-29 09:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:24:04.694513725 +0000 UTC m=+1064.817353942" watchObservedRunningTime="2026-01-29 09:24:04.70353467 +0000 UTC m=+1064.826374917" Jan 29 09:24:04 crc kubenswrapper[4771]: I0129 09:24:04.756486 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.275110303 
podStartE2EDuration="4.756461472s" podCreationTimestamp="2026-01-29 09:24:00 +0000 UTC" firstStartedPulling="2026-01-29 09:24:01.667233786 +0000 UTC m=+1061.790074013" lastFinishedPulling="2026-01-29 09:24:03.148584945 +0000 UTC m=+1063.271425182" observedRunningTime="2026-01-29 09:24:04.751787055 +0000 UTC m=+1064.874627282" watchObservedRunningTime="2026-01-29 09:24:04.756461472 +0000 UTC m=+1064.879301699" Jan 29 09:24:05 crc kubenswrapper[4771]: I0129 09:24:05.701806 4771 generic.go:334] "Generic (PLEG): container finished" podID="511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8" containerID="47a503c09f3bdbfcf388b163122cdd820a75aea01f3465ece74aceb555e11f97" exitCode=0 Jan 29 09:24:05 crc kubenswrapper[4771]: I0129 09:24:05.702097 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctjw4" event={"ID":"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8","Type":"ContainerDied","Data":"47a503c09f3bdbfcf388b163122cdd820a75aea01f3465ece74aceb555e11f97"} Jan 29 09:24:05 crc kubenswrapper[4771]: I0129 09:24:05.762965 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.025859 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-gj54q" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.103865 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-hbrs9"] Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.126339 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.154498 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjjnp\" (UniqueName: \"kubernetes.io/projected/152cca8c-e161-488f-b400-ec92a43fd836-kube-api-access-qjjnp\") pod \"152cca8c-e161-488f-b400-ec92a43fd836\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.154556 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152cca8c-e161-488f-b400-ec92a43fd836-operator-scripts\") pod \"152cca8c-e161-488f-b400-ec92a43fd836\" (UID: \"152cca8c-e161-488f-b400-ec92a43fd836\") " Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.155609 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152cca8c-e161-488f-b400-ec92a43fd836-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "152cca8c-e161-488f-b400-ec92a43fd836" (UID: "152cca8c-e161-488f-b400-ec92a43fd836"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.187160 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152cca8c-e161-488f-b400-ec92a43fd836-kube-api-access-qjjnp" (OuterVolumeSpecName: "kube-api-access-qjjnp") pod "152cca8c-e161-488f-b400-ec92a43fd836" (UID: "152cca8c-e161-488f-b400-ec92a43fd836"). InnerVolumeSpecName "kube-api-access-qjjnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
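
The pod_startup_latency_tracker.go:104 entries record two durations. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For ovn-northd-0 above: 4.756461472s end to end, minus a 1.481351159s pull window, gives about 3.275110313s, matching the logged 3.275110303 up to floating-point rounding. For the glance jobs the pull timestamps are zero values (nothing was pulled), so the SLO and E2E durations are equal. The arithmetic, checked in a few lines of Go:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse(time.RFC3339Nano, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps taken directly from the ovn-northd-0 entry above.
        created := mustParse("2026-01-29T09:24:00Z")
        running := mustParse("2026-01-29T09:24:04.756461472Z")
        pullStart := mustParse("2026-01-29T09:24:01.667233786Z")
        pullEnd := mustParse("2026-01-29T09:24:03.148584945Z")

        e2e := running.Sub(created)         // 4.756461472s
        slo := e2e - pullEnd.Sub(pullStart) // pull window excluded: 3.275110313s
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("podStartSLOduration:", slo) // log shows 3.275110303 (float rounding)
    }

Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.247485 4771 util.go:48] "No ready sandbox for pod can be found.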
Need to start a new one" pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.258080 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjjnp\" (UniqueName: \"kubernetes.io/projected/152cca8c-e161-488f-b400-ec92a43fd836-kube-api-access-qjjnp\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.258148 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152cca8c-e161-488f-b400-ec92a43fd836-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.298245 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ckk6x"] Jan 29 09:24:06 crc kubenswrapper[4771]: E0129 09:24:06.298781 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e97643-6806-457b-998d-82ddd64ccd99" containerName="mariadb-account-create-update" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.298804 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e97643-6806-457b-998d-82ddd64ccd99" containerName="mariadb-account-create-update" Jan 29 09:24:06 crc kubenswrapper[4771]: E0129 09:24:06.298863 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152cca8c-e161-488f-b400-ec92a43fd836" containerName="mariadb-database-create" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.298871 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="152cca8c-e161-488f-b400-ec92a43fd836" containerName="mariadb-database-create" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.299023 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="152cca8c-e161-488f-b400-ec92a43fd836" containerName="mariadb-database-create" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.299041 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e97643-6806-457b-998d-82ddd64ccd99" containerName="mariadb-account-create-update"
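
The E/I pairs from cpu_manager.go:410, state_mem.go:107, and memory_manager.go:354 are housekeeping triggered by admitting keystone-db-create-ckk6x: the CPU and memory managers sweep out per-container assignments still held for the two glance job pods, whose containers have already exited. A sketch of such a stale-state sweep, assuming a plain map in place of the managers' checkpointed state:

    package main

    import "fmt"

    // removeStaleState drops per-container assignments whose pods are no
    // longer active, mirroring the cpu_manager/memory_manager sweep above.
    func removeStaleState(assignments map[string]map[string]string, active map[string]bool) {
        for podUID, containers := range assignments {
            if active[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
            }
            delete(assignments, podUID) // the "Deleted CPUSet assignment" step
        }
    }

    func main() {
        // Hypothetical assignments keyed by (shortened) pod UID.
        assignments := map[string]map[string]string{
            "152cca8c": {"mariadb-database-create": "cpus 0-3"},
            "f1e97643": {"mariadb-account-create-update": "cpus 0-3"},
        }
        removeStaleState(assignments, map[string]bool{}) // neither pod is active any more
    }

Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.301355 4771 util.go:30] "No sandbox for pod can be found.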
Need to start a new one" pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.330879 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ckk6x"] Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.358918 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1e97643-6806-457b-998d-82ddd64ccd99-operator-scripts\") pod \"f1e97643-6806-457b-998d-82ddd64ccd99\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.359073 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgmdc\" (UniqueName: \"kubernetes.io/projected/f1e97643-6806-457b-998d-82ddd64ccd99-kube-api-access-xgmdc\") pod \"f1e97643-6806-457b-998d-82ddd64ccd99\" (UID: \"f1e97643-6806-457b-998d-82ddd64ccd99\") " Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.359363 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm47r\" (UniqueName: \"kubernetes.io/projected/752a764f-52c0-4773-a3fc-8e2a62643f06-kube-api-access-qm47r\") pod \"keystone-db-create-ckk6x\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") " pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.359446 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752a764f-52c0-4773-a3fc-8e2a62643f06-operator-scripts\") pod \"keystone-db-create-ckk6x\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") " pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.361146 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e97643-6806-457b-998d-82ddd64ccd99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1e97643-6806-457b-998d-82ddd64ccd99" (UID: "f1e97643-6806-457b-998d-82ddd64ccd99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.375959 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e97643-6806-457b-998d-82ddd64ccd99-kube-api-access-xgmdc" (OuterVolumeSpecName: "kube-api-access-xgmdc") pod "f1e97643-6806-457b-998d-82ddd64ccd99" (UID: "f1e97643-6806-457b-998d-82ddd64ccd99"). InnerVolumeSpecName "kube-api-access-xgmdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.444331 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-61ed-account-create-update-ndvj5"] Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.451139 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.454105 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.469873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752a764f-52c0-4773-a3fc-8e2a62643f06-operator-scripts\") pod \"keystone-db-create-ckk6x\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") " pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.470145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm47r\" (UniqueName: \"kubernetes.io/projected/752a764f-52c0-4773-a3fc-8e2a62643f06-kube-api-access-qm47r\") pod \"keystone-db-create-ckk6x\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") " pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.470271 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1e97643-6806-457b-998d-82ddd64ccd99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.470287 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgmdc\" (UniqueName: \"kubernetes.io/projected/f1e97643-6806-457b-998d-82ddd64ccd99-kube-api-access-xgmdc\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.492637 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752a764f-52c0-4773-a3fc-8e2a62643f06-operator-scripts\") pod \"keystone-db-create-ckk6x\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") " pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.494036 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-61ed-account-create-update-ndvj5"] Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.509477 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm47r\" (UniqueName: \"kubernetes.io/projected/752a764f-52c0-4773-a3fc-8e2a62643f06-kube-api-access-qm47r\") pod \"keystone-db-create-ckk6x\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") " pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.573374 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwmn\" (UniqueName: \"kubernetes.io/projected/897e97b1-08ea-4e51-a36f-38727e7eb34e-kube-api-access-htwmn\") pod \"keystone-61ed-account-create-update-ndvj5\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") " pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.573511 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897e97b1-08ea-4e51-a36f-38727e7eb34e-operator-scripts\") pod \"keystone-61ed-account-create-update-ndvj5\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") " pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.619274 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-db-create-b7brh"] Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.625403 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.635900 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.660068 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b7brh"] Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.675881 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-operator-scripts\") pod \"placement-db-create-b7brh\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.676037 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897e97b1-08ea-4e51-a36f-38727e7eb34e-operator-scripts\") pod \"keystone-61ed-account-create-update-ndvj5\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") " pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.676131 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl79j\" (UniqueName: \"kubernetes.io/projected/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-kube-api-access-jl79j\") pod \"placement-db-create-b7brh\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.676190 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwmn\" (UniqueName: \"kubernetes.io/projected/897e97b1-08ea-4e51-a36f-38727e7eb34e-kube-api-access-htwmn\") pod \"keystone-61ed-account-create-update-ndvj5\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") " pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.677634 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897e97b1-08ea-4e51-a36f-38727e7eb34e-operator-scripts\") pod \"keystone-61ed-account-create-update-ndvj5\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") " pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.708160 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwmn\" (UniqueName: \"kubernetes.io/projected/897e97b1-08ea-4e51-a36f-38727e7eb34e-kube-api-access-htwmn\") pod \"keystone-61ed-account-create-update-ndvj5\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") " pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.740171 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5efa-account-create-update-6vnhr" event={"ID":"f1e97643-6806-457b-998d-82ddd64ccd99","Type":"ContainerDied","Data":"b5a29beedb2ac7834ae88d9ed22dc0adef33718f47d24305d9e542fffb851fc8"} Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.740442 4771 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b5a29beedb2ac7834ae88d9ed22dc0adef33718f47d24305d9e542fffb851fc8" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.740523 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5efa-account-create-update-6vnhr" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.742592 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j5t4z" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.742740 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j5t4z" event={"ID":"152cca8c-e161-488f-b400-ec92a43fd836","Type":"ContainerDied","Data":"c14a5822fd7f97614b33c7599b9f2f5f8c9b43042d41379f6992532c079aa256"} Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.742823 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c14a5822fd7f97614b33c7599b9f2f5f8c9b43042d41379f6992532c079aa256" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.743155 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" podUID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerName="dnsmasq-dns" containerID="cri-o://71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04" gracePeriod=10 Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.777932 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-operator-scripts\") pod \"placement-db-create-b7brh\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.778140 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl79j\" (UniqueName: \"kubernetes.io/projected/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-kube-api-access-jl79j\") pod \"placement-db-create-b7brh\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.779141 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-operator-scripts\") pod \"placement-db-create-b7brh\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.783169 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-14ea-account-create-update-89b88"] Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.784519 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.787398 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.797642 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-14ea-account-create-update-89b88"]
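
The kuberuntime_container.go:808 entry a few lines above starts the replacement of the old dnsmasq-dns pod: the kubelet asks CRI-O to stop the dnsmasq-dns container with gracePeriod=10, meaning a stop signal first and a hard kill only if the container outlives the window; the exitCode=0 recorded at 09:24:07 shows it exited cleanly well inside it. A sketch of the same TERM-then-KILL shape for an ordinary Linux process (stopWithGrace is an assumed helper, not a kubelet API):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM and escalates to SIGKILL after the grace
    // period, the same shape as "Killing container with a grace period".
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period, as dnsmasq does above
        case <-time.After(grace):
            _ = cmd.Process.Kill() // escalate: the process ignored SIGTERM
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "30")
        _ = cmd.Start()
        fmt.Println(stopWithGrace(cmd, 10*time.Second))
    }

Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.810774 4771 util.go:30] "No sandbox for pod can be found.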
Need to start a new one" pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.813960 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl79j\" (UniqueName: \"kubernetes.io/projected/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-kube-api-access-jl79j\") pod \"placement-db-create-b7brh\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.882769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4ld\" (UniqueName: \"kubernetes.io/projected/c8ec8a48-5800-43a5-ae42-1f2a2309463d-kube-api-access-bw4ld\") pod \"placement-14ea-account-create-update-89b88\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.882868 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec8a48-5800-43a5-ae42-1f2a2309463d-operator-scripts\") pod \"placement-14ea-account-create-update-89b88\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.946640 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b7brh" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.984861 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw4ld\" (UniqueName: \"kubernetes.io/projected/c8ec8a48-5800-43a5-ae42-1f2a2309463d-kube-api-access-bw4ld\") pod \"placement-14ea-account-create-update-89b88\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.984923 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec8a48-5800-43a5-ae42-1f2a2309463d-operator-scripts\") pod \"placement-14ea-account-create-update-89b88\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:06 crc kubenswrapper[4771]: I0129 09:24:06.985858 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec8a48-5800-43a5-ae42-1f2a2309463d-operator-scripts\") pod \"placement-14ea-account-create-update-89b88\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.022129 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw4ld\" (UniqueName: \"kubernetes.io/projected/c8ec8a48-5800-43a5-ae42-1f2a2309463d-kube-api-access-bw4ld\") pod \"placement-14ea-account-create-update-89b88\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.147209 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.188794 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ckk6x"] Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.237088 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wl28d"] Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.238507 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.254961 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x86l6" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.255982 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.277274 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wl28d"] Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.294319 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-combined-ca-bundle\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.294689 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-db-sync-config-data\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.294792 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgt45\" (UniqueName: \"kubernetes.io/projected/2a9c1def-826f-4029-94c3-5670ce333c66-kube-api-access-xgt45\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.294819 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-config-data\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.376673 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctjw4" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.398075 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-operator-scripts\") pod \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.398233 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdslz\" (UniqueName: \"kubernetes.io/projected/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-kube-api-access-bdslz\") pod \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\" (UID: \"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8\") " Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.398819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgt45\" (UniqueName: \"kubernetes.io/projected/2a9c1def-826f-4029-94c3-5670ce333c66-kube-api-access-xgt45\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.398867 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-config-data\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.398991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-combined-ca-bundle\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.399055 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-db-sync-config-data\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.403083 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8" (UID: "511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.412687 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-config-data\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.415311 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-kube-api-access-bdslz" (OuterVolumeSpecName: "kube-api-access-bdslz") pod "511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8" (UID: "511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8"). InnerVolumeSpecName "kube-api-access-bdslz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.425450 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-combined-ca-bundle\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.428648 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgt45\" (UniqueName: \"kubernetes.io/projected/2a9c1def-826f-4029-94c3-5670ce333c66-kube-api-access-xgt45\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.434686 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-db-sync-config-data\") pod \"glance-db-sync-wl28d\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") " pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.501279 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.501327 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdslz\" (UniqueName: \"kubernetes.io/projected/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8-kube-api-access-bdslz\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.534975 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-61ed-account-create-update-ndvj5"] Jan 29 09:24:07 crc kubenswrapper[4771]: W0129 09:24:07.562951 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod897e97b1_08ea_4e51_a36f_38727e7eb34e.slice/crio-606f56126de309b9361616b0921c02063829303cf85c621c8f51f7493e7dfd99 WatchSource:0}: Error finding container 606f56126de309b9361616b0921c02063829303cf85c621c8f51f7493e7dfd99: Status 404 returned error can't find the container with id 606f56126de309b9361616b0921c02063829303cf85c621c8f51f7493e7dfd99 Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.583451 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wl28d" Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.669390 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b7brh"] Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.712390 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.769510 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-61ed-account-create-update-ndvj5" event={"ID":"897e97b1-08ea-4e51-a36f-38727e7eb34e","Type":"ContainerStarted","Data":"606f56126de309b9361616b0921c02063829303cf85c621c8f51f7493e7dfd99"}
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.775288 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ckk6x" event={"ID":"752a764f-52c0-4773-a3fc-8e2a62643f06","Type":"ContainerStarted","Data":"d4fd7bb63da756ce532612776b13badad790c419c6b56374ef3390554e63128c"}
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.799795 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerID="71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04" exitCode=0
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.799903 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" event={"ID":"6c38cd72-7f98-42a2-b5f4-abf5f9125c90","Type":"ContainerDied","Data":"71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04"}
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.799943 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9" event={"ID":"6c38cd72-7f98-42a2-b5f4-abf5f9125c90","Type":"ContainerDied","Data":"020c708cda82d821839514b47c456b9aadf2700c6293c056649f6fb9c594b21d"}
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.799962 4771 scope.go:117] "RemoveContainer" containerID="71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.800130 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-hbrs9"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.808253 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-config\") pod \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") "
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.808290 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpjtp\" (UniqueName: \"kubernetes.io/projected/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-kube-api-access-rpjtp\") pod \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") "
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.808358 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-dns-svc\") pod \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") "
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.808505 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-ovsdbserver-sb\") pod \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\" (UID: \"6c38cd72-7f98-42a2-b5f4-abf5f9125c90\") "
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.809183 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b7brh" event={"ID":"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8","Type":"ContainerStarted","Data":"fd25fd40604c7cce2089fbac42e752afed437a2738d4e9d0ded0ca3e1daa44cd"}
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.814760 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctjw4" event={"ID":"511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8","Type":"ContainerDied","Data":"7843ae443d1a0e447e2bbb234df38da7afdcbbe1c00dcc7dc556a65fea44d84e"}
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.814791 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7843ae443d1a0e447e2bbb234df38da7afdcbbe1c00dcc7dc556a65fea44d84e"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.814847 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ctjw4"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.816993 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-kube-api-access-rpjtp" (OuterVolumeSpecName: "kube-api-access-rpjtp") pod "6c38cd72-7f98-42a2-b5f4-abf5f9125c90" (UID: "6c38cd72-7f98-42a2-b5f4-abf5f9125c90"). InnerVolumeSpecName "kube-api-access-rpjtp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.853329 4771 scope.go:117] "RemoveContainer" containerID="634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.875255 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-config" (OuterVolumeSpecName: "config") pod "6c38cd72-7f98-42a2-b5f4-abf5f9125c90" (UID: "6c38cd72-7f98-42a2-b5f4-abf5f9125c90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.894869 4771 scope.go:117] "RemoveContainer" containerID="71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.902127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c38cd72-7f98-42a2-b5f4-abf5f9125c90" (UID: "6c38cd72-7f98-42a2-b5f4-abf5f9125c90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:24:07 crc kubenswrapper[4771]: E0129 09:24:07.902434 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04\": container with ID starting with 71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04 not found: ID does not exist" containerID="71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.903335 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04"} err="failed to get container status \"71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04\": rpc error: code = NotFound desc = could not find container \"71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04\": container with ID starting with 71f006b211aa8f7259ed1142fab2de00b974b6b57d403f29f23a9acc6b214f04 not found: ID does not exist"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.903395 4771 scope.go:117] "RemoveContainer" containerID="634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.907975 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c38cd72-7f98-42a2-b5f4-abf5f9125c90" (UID: "6c38cd72-7f98-42a2-b5f4-abf5f9125c90"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.909409 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.909432 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.909442 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpjtp\" (UniqueName: \"kubernetes.io/projected/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-kube-api-access-rpjtp\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.909451 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c38cd72-7f98-42a2-b5f4-abf5f9125c90-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:07 crc kubenswrapper[4771]: E0129 09:24:07.911628 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2\": container with ID starting with 634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2 not found: ID does not exist" containerID="634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.911654 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2"} err="failed to get container status \"634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2\": rpc error: code = NotFound desc = could not find container \"634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2\": container with ID starting with 634b0497e7c35cd643a210f33e7ebb930b366b0debb5b807078c7cd23fc3bcb2 not found: ID does not exist"
Jan 29 09:24:07 crc kubenswrapper[4771]: I0129 09:24:07.929012 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-14ea-account-create-update-89b88"]
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.135070 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-hbrs9"]
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.143996 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-hbrs9"]
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.326770 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wl28d"]
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.558353 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.578655 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9hc6l"]
Jan 29 09:24:08 crc kubenswrapper[4771]: E0129 09:24:08.579077 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerName="init"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.579090 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerName="init"
Jan 29 09:24:08 crc kubenswrapper[4771]: E0129 09:24:08.579138 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerName="dnsmasq-dns"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.579145 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerName="dnsmasq-dns"
Jan 29 09:24:08 crc kubenswrapper[4771]: E0129 09:24:08.579155 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8" containerName="mariadb-account-create-update"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.579163 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8" containerName="mariadb-account-create-update"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.579334 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" containerName="dnsmasq-dns"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.579358 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8" containerName="mariadb-account-create-update"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.580278 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.615268 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9hc6l"]
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.628053 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.628121 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.628150 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.628251 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fhb\" (UniqueName: \"kubernetes.io/projected/2505a610-4ed2-406a-9215-7e8a23df996d-kube-api-access-49fhb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.628292 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-config\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.729936 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.730005 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.730024 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.730084 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49fhb\" (UniqueName: \"kubernetes.io/projected/2505a610-4ed2-406a-9215-7e8a23df996d-kube-api-access-49fhb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.730115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-config\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.731011 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-config\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.731568 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.731833 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.731916 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.771963 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49fhb\" (UniqueName: \"kubernetes.io/projected/2505a610-4ed2-406a-9215-7e8a23df996d-kube-api-access-49fhb\") pod \"dnsmasq-dns-b8fbc5445-9hc6l\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.832230 4771 generic.go:334] "Generic (PLEG): container finished" podID="c8ec8a48-5800-43a5-ae42-1f2a2309463d" containerID="66fa316576c918d465f02bc66ba86fccdf2935ff17bb14c3491cbf3c2d0321c4" exitCode=0
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.832336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-14ea-account-create-update-89b88" event={"ID":"c8ec8a48-5800-43a5-ae42-1f2a2309463d","Type":"ContainerDied","Data":"66fa316576c918d465f02bc66ba86fccdf2935ff17bb14c3491cbf3c2d0321c4"}
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.832366 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-14ea-account-create-update-89b88" event={"ID":"c8ec8a48-5800-43a5-ae42-1f2a2309463d","Type":"ContainerStarted","Data":"64b34559af2d33d5ab805b4b790398b583ff928e6e28818625ccc86f94269286"}
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.839265 4771 generic.go:334] "Generic (PLEG): container finished" podID="897e97b1-08ea-4e51-a36f-38727e7eb34e" containerID="62bce91722b6b2610db07fc46f1476a4223ff84f20270fda73f87b4378b18b24" exitCode=0
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.849118 4771 generic.go:334] "Generic (PLEG): container finished" podID="752a764f-52c0-4773-a3fc-8e2a62643f06" containerID="b0195e31a697d5a99d9579d22737a1b233d8aaf64df5a0f1cfa3118ca9ac3a62" exitCode=0
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.851049 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c38cd72-7f98-42a2-b5f4-abf5f9125c90" path="/var/lib/kubelet/pods/6c38cd72-7f98-42a2-b5f4-abf5f9125c90/volumes"
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.851853 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-61ed-account-create-update-ndvj5" event={"ID":"897e97b1-08ea-4e51-a36f-38727e7eb34e","Type":"ContainerDied","Data":"62bce91722b6b2610db07fc46f1476a4223ff84f20270fda73f87b4378b18b24"}
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.851893 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wl28d" event={"ID":"2a9c1def-826f-4029-94c3-5670ce333c66","Type":"ContainerStarted","Data":"b22c45ace1d50986213ba0432464963044cd57e1d13cd38f33d541ab48c4ba68"}
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.851909 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ckk6x" event={"ID":"752a764f-52c0-4773-a3fc-8e2a62643f06","Type":"ContainerDied","Data":"b0195e31a697d5a99d9579d22737a1b233d8aaf64df5a0f1cfa3118ca9ac3a62"}
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.855654 4771 generic.go:334] "Generic (PLEG): container finished" podID="4d0d75c6-5a56-4a42-a335-bb7ba669b7f8" containerID="c4ca2f95bf39a490caba3f0bbcbf7fc838e377f4d6bcce3978d353318f561795" exitCode=0
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.855725 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b7brh" event={"ID":"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8","Type":"ContainerDied","Data":"c4ca2f95bf39a490caba3f0bbcbf7fc838e377f4d6bcce3978d353318f561795"}
Jan 29 09:24:08 crc kubenswrapper[4771]: I0129 09:24:08.960607 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.559637 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9hc6l"]
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.757333 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.764010 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.766318 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.766564 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.766721 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hppgc"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.766910 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.768527 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.870207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" event={"ID":"2505a610-4ed2-406a-9215-7e8a23df996d","Type":"ContainerStarted","Data":"32b9c6ae277fb825fd3d78d0a81f69d4eca9182de7841afe16a7ddffba255e9d"}
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.870248 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" event={"ID":"2505a610-4ed2-406a-9215-7e8a23df996d","Type":"ContainerStarted","Data":"f79d5777329718ef20dc33c62cb2bacb4bd346bd0dbc60cc3a0e7270c47aadfb"}
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.970115 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.970260 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.970443 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.970517 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9zq\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-kube-api-access-4d9zq\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.970562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-lock\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:09 crc kubenswrapper[4771]: I0129 09:24:09.970599 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-cache\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.072360 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.072847 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9zq\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-kube-api-access-4d9zq\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.072968 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-lock\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.073014 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-cache\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.073059 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.073130 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: E0129 09:24:10.073302 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 09:24:10 crc kubenswrapper[4771]: E0129 09:24:10.073322 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 09:24:10 crc kubenswrapper[4771]: E0129 09:24:10.073377 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift podName:e6ce7b26-bcc5-4306-ab2c-5691cceeb18f nodeName:}" failed. No retries permitted until 2026-01-29 09:24:10.573360131 +0000 UTC m=+1070.696200358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift") pod "swift-storage-0" (UID: "e6ce7b26-bcc5-4306-ab2c-5691cceeb18f") : configmap "swift-ring-files" not found
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.073526 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-lock\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.073681 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.073931 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-cache\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.080151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.096732 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9zq\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-kube-api-access-4d9zq\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.137952 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ctjw4"]
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.147003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.155311 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ctjw4"]
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.261143 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-61ed-account-create-update-ndvj5"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.381542 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897e97b1-08ea-4e51-a36f-38727e7eb34e-operator-scripts\") pod \"897e97b1-08ea-4e51-a36f-38727e7eb34e\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") "
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.381849 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htwmn\" (UniqueName: \"kubernetes.io/projected/897e97b1-08ea-4e51-a36f-38727e7eb34e-kube-api-access-htwmn\") pod \"897e97b1-08ea-4e51-a36f-38727e7eb34e\" (UID: \"897e97b1-08ea-4e51-a36f-38727e7eb34e\") "
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.382738 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897e97b1-08ea-4e51-a36f-38727e7eb34e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "897e97b1-08ea-4e51-a36f-38727e7eb34e" (UID: "897e97b1-08ea-4e51-a36f-38727e7eb34e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.413145 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897e97b1-08ea-4e51-a36f-38727e7eb34e-kube-api-access-htwmn" (OuterVolumeSpecName: "kube-api-access-htwmn") pod "897e97b1-08ea-4e51-a36f-38727e7eb34e" (UID: "897e97b1-08ea-4e51-a36f-38727e7eb34e"). InnerVolumeSpecName "kube-api-access-htwmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.485364 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htwmn\" (UniqueName: \"kubernetes.io/projected/897e97b1-08ea-4e51-a36f-38727e7eb34e-kube-api-access-htwmn\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.485398 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897e97b1-08ea-4e51-a36f-38727e7eb34e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.564854 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ckk6x"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.587533 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0"
Jan 29 09:24:10 crc kubenswrapper[4771]: E0129 09:24:10.587773 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 09:24:10 crc kubenswrapper[4771]: E0129 09:24:10.587789 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 09:24:10 crc kubenswrapper[4771]: E0129 09:24:10.587830 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift podName:e6ce7b26-bcc5-4306-ab2c-5691cceeb18f nodeName:}" failed. No retries permitted until 2026-01-29 09:24:11.587814347 +0000 UTC m=+1071.710654574 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift") pod "swift-storage-0" (UID: "e6ce7b26-bcc5-4306-ab2c-5691cceeb18f") : configmap "swift-ring-files" not found
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.590640 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-14ea-account-create-update-89b88"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.600147 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b7brh"
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.689143 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752a764f-52c0-4773-a3fc-8e2a62643f06-operator-scripts\") pod \"752a764f-52c0-4773-a3fc-8e2a62643f06\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") "
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.689399 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm47r\" (UniqueName: \"kubernetes.io/projected/752a764f-52c0-4773-a3fc-8e2a62643f06-kube-api-access-qm47r\") pod \"752a764f-52c0-4773-a3fc-8e2a62643f06\" (UID: \"752a764f-52c0-4773-a3fc-8e2a62643f06\") "
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.690112 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752a764f-52c0-4773-a3fc-8e2a62643f06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "752a764f-52c0-4773-a3fc-8e2a62643f06" (UID: "752a764f-52c0-4773-a3fc-8e2a62643f06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.694882 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752a764f-52c0-4773-a3fc-8e2a62643f06-kube-api-access-qm47r" (OuterVolumeSpecName: "kube-api-access-qm47r") pod "752a764f-52c0-4773-a3fc-8e2a62643f06" (UID: "752a764f-52c0-4773-a3fc-8e2a62643f06"). InnerVolumeSpecName "kube-api-access-qm47r". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.791328 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-operator-scripts\") pod \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.791395 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl79j\" (UniqueName: \"kubernetes.io/projected/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-kube-api-access-jl79j\") pod \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\" (UID: \"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8\") " Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.791422 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec8a48-5800-43a5-ae42-1f2a2309463d-operator-scripts\") pod \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.791498 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw4ld\" (UniqueName: \"kubernetes.io/projected/c8ec8a48-5800-43a5-ae42-1f2a2309463d-kube-api-access-bw4ld\") pod \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\" (UID: \"c8ec8a48-5800-43a5-ae42-1f2a2309463d\") " Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.791890 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d0d75c6-5a56-4a42-a335-bb7ba669b7f8" (UID: "4d0d75c6-5a56-4a42-a335-bb7ba669b7f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.791985 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752a764f-52c0-4773-a3fc-8e2a62643f06-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.792005 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm47r\" (UniqueName: \"kubernetes.io/projected/752a764f-52c0-4773-a3fc-8e2a62643f06-kube-api-access-qm47r\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.792504 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ec8a48-5800-43a5-ae42-1f2a2309463d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8ec8a48-5800-43a5-ae42-1f2a2309463d" (UID: "c8ec8a48-5800-43a5-ae42-1f2a2309463d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.796232 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ec8a48-5800-43a5-ae42-1f2a2309463d-kube-api-access-bw4ld" (OuterVolumeSpecName: "kube-api-access-bw4ld") pod "c8ec8a48-5800-43a5-ae42-1f2a2309463d" (UID: "c8ec8a48-5800-43a5-ae42-1f2a2309463d"). InnerVolumeSpecName "kube-api-access-bw4ld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.798274 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-kube-api-access-jl79j" (OuterVolumeSpecName: "kube-api-access-jl79j") pod "4d0d75c6-5a56-4a42-a335-bb7ba669b7f8" (UID: "4d0d75c6-5a56-4a42-a335-bb7ba669b7f8"). InnerVolumeSpecName "kube-api-access-jl79j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.850415 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8" path="/var/lib/kubelet/pods/511abc3e-d6c4-462c-bdeb-06f6f9e4c1b8/volumes" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.890272 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b7brh" event={"ID":"4d0d75c6-5a56-4a42-a335-bb7ba669b7f8","Type":"ContainerDied","Data":"fd25fd40604c7cce2089fbac42e752afed437a2738d4e9d0ded0ca3e1daa44cd"} Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.890483 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd25fd40604c7cce2089fbac42e752afed437a2738d4e9d0ded0ca3e1daa44cd" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.890297 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b7brh" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.892256 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-14ea-account-create-update-89b88" event={"ID":"c8ec8a48-5800-43a5-ae42-1f2a2309463d","Type":"ContainerDied","Data":"64b34559af2d33d5ab805b4b790398b583ff928e6e28818625ccc86f94269286"} Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.892309 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b34559af2d33d5ab805b4b790398b583ff928e6e28818625ccc86f94269286" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.892272 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-14ea-account-create-update-89b88" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.893163 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.893193 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl79j\" (UniqueName: \"kubernetes.io/projected/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8-kube-api-access-jl79j\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.893211 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8ec8a48-5800-43a5-ae42-1f2a2309463d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.893224 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw4ld\" (UniqueName: \"kubernetes.io/projected/c8ec8a48-5800-43a5-ae42-1f2a2309463d-kube-api-access-bw4ld\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.895226 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-61ed-account-create-update-ndvj5" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.895227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-61ed-account-create-update-ndvj5" event={"ID":"897e97b1-08ea-4e51-a36f-38727e7eb34e","Type":"ContainerDied","Data":"606f56126de309b9361616b0921c02063829303cf85c621c8f51f7493e7dfd99"} Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.895334 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="606f56126de309b9361616b0921c02063829303cf85c621c8f51f7493e7dfd99" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.896885 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ckk6x" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.896878 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ckk6x" event={"ID":"752a764f-52c0-4773-a3fc-8e2a62643f06","Type":"ContainerDied","Data":"d4fd7bb63da756ce532612776b13badad790c419c6b56374ef3390554e63128c"} Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.897144 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4fd7bb63da756ce532612776b13badad790c419c6b56374ef3390554e63128c" Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.897930 4771 generic.go:334] "Generic (PLEG): container finished" podID="2505a610-4ed2-406a-9215-7e8a23df996d" containerID="32b9c6ae277fb825fd3d78d0a81f69d4eca9182de7841afe16a7ddffba255e9d" exitCode=0 Jan 29 09:24:10 crc kubenswrapper[4771]: I0129 09:24:10.898000 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" event={"ID":"2505a610-4ed2-406a-9215-7e8a23df996d","Type":"ContainerDied","Data":"32b9c6ae277fb825fd3d78d0a81f69d4eca9182de7841afe16a7ddffba255e9d"} Jan 29 09:24:11 crc kubenswrapper[4771]: I0129 09:24:11.606729 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0" Jan 29 09:24:11 crc kubenswrapper[4771]: E0129 09:24:11.607277 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:24:11 crc kubenswrapper[4771]: E0129 09:24:11.607291 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:24:11 crc kubenswrapper[4771]: E0129 09:24:11.607340 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift podName:e6ce7b26-bcc5-4306-ab2c-5691cceeb18f nodeName:}" failed. No retries permitted until 2026-01-29 09:24:13.607324884 +0000 UTC m=+1073.730165111 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift") pod "swift-storage-0" (UID: "e6ce7b26-bcc5-4306-ab2c-5691cceeb18f") : configmap "swift-ring-files" not found Jan 29 09:24:11 crc kubenswrapper[4771]: I0129 09:24:11.910086 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" event={"ID":"2505a610-4ed2-406a-9215-7e8a23df996d","Type":"ContainerStarted","Data":"c344d6f90b22c61b8e5c1a0e39a83f45b277933c0618a0431a31c5628c9a4a41"} Jan 29 09:24:11 crc kubenswrapper[4771]: I0129 09:24:11.910318 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" Jan 29 09:24:11 crc kubenswrapper[4771]: I0129 09:24:11.948195 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" podStartSLOduration=3.948157751 podStartE2EDuration="3.948157751s" podCreationTimestamp="2026-01-29 09:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:24:11.931223959 +0000 UTC m=+1072.054064186" watchObservedRunningTime="2026-01-29 09:24:11.948157751 +0000 UTC m=+1072.070997988" Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.648345 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0" Jan 29 09:24:13 crc kubenswrapper[4771]: E0129 09:24:13.649222 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:24:13 crc kubenswrapper[4771]: E0129 09:24:13.649249 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:24:13 crc kubenswrapper[4771]: E0129 09:24:13.649319 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift podName:e6ce7b26-bcc5-4306-ab2c-5691cceeb18f nodeName:}" failed. No retries permitted until 2026-01-29 09:24:17.649293527 +0000 UTC m=+1077.772133754 (durationBeforeRetry 4s). 
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.675273 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-snzfb"]
Jan 29 09:24:13 crc kubenswrapper[4771]: E0129 09:24:13.675733 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897e97b1-08ea-4e51-a36f-38727e7eb34e" containerName="mariadb-account-create-update"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.675754 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="897e97b1-08ea-4e51-a36f-38727e7eb34e" containerName="mariadb-account-create-update"
Jan 29 09:24:13 crc kubenswrapper[4771]: E0129 09:24:13.675794 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0d75c6-5a56-4a42-a335-bb7ba669b7f8" containerName="mariadb-database-create"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.675803 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0d75c6-5a56-4a42-a335-bb7ba669b7f8" containerName="mariadb-database-create"
Jan 29 09:24:13 crc kubenswrapper[4771]: E0129 09:24:13.675823 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752a764f-52c0-4773-a3fc-8e2a62643f06" containerName="mariadb-database-create"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.675831 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="752a764f-52c0-4773-a3fc-8e2a62643f06" containerName="mariadb-database-create"
Jan 29 09:24:13 crc kubenswrapper[4771]: E0129 09:24:13.675845 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ec8a48-5800-43a5-ae42-1f2a2309463d" containerName="mariadb-account-create-update"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.675855 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ec8a48-5800-43a5-ae42-1f2a2309463d" containerName="mariadb-account-create-update"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.676075 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="897e97b1-08ea-4e51-a36f-38727e7eb34e" containerName="mariadb-account-create-update"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.676101 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="752a764f-52c0-4773-a3fc-8e2a62643f06" containerName="mariadb-database-create"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.676112 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0d75c6-5a56-4a42-a335-bb7ba669b7f8" containerName="mariadb-database-create"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.676125 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ec8a48-5800-43a5-ae42-1f2a2309463d" containerName="mariadb-account-create-update"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.676883 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.678911 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.680355 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.682246 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.689314 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-snzfb"]
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.853139 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-scripts\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.853189 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-ring-data-devices\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.853268 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-swiftconf\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.853376 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzbcm\" (UniqueName: \"kubernetes.io/projected/543a7e6c-ab47-4720-b5f0-6b0800904d36-kube-api-access-mzbcm\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.853432 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-dispersionconf\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.853503 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-combined-ca-bundle\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.853648 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/543a7e6c-ab47-4720-b5f0-6b0800904d36-etc-swift\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.956269 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-dispersionconf\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.956367 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-combined-ca-bundle\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.956393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/543a7e6c-ab47-4720-b5f0-6b0800904d36-etc-swift\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.956462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-ring-data-devices\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.956486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-scripts\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.956590 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-swiftconf\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.956651 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzbcm\" (UniqueName: \"kubernetes.io/projected/543a7e6c-ab47-4720-b5f0-6b0800904d36-kube-api-access-mzbcm\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.958560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-ring-data-devices\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.958635 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-scripts\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.959590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/543a7e6c-ab47-4720-b5f0-6b0800904d36-etc-swift\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.968553 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-swiftconf\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.969840 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-combined-ca-bundle\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.979207 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-dispersionconf\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:13 crc kubenswrapper[4771]: I0129 09:24:13.986352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzbcm\" (UniqueName: \"kubernetes.io/projected/543a7e6c-ab47-4720-b5f0-6b0800904d36-kube-api-access-mzbcm\") pod \"swift-ring-rebalance-snzfb\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:14 crc kubenswrapper[4771]: I0129 09:24:14.019290 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-snzfb"
Jan 29 09:24:14 crc kubenswrapper[4771]: I0129 09:24:14.516409 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-snzfb"]
Jan 29 09:24:14 crc kubenswrapper[4771]: I0129 09:24:14.945987 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snzfb" event={"ID":"543a7e6c-ab47-4720-b5f0-6b0800904d36","Type":"ContainerStarted","Data":"1b458285a354ca0cba7373b2e1eefd1c5ea0d7669206302ef4bcfce52ccf3526"}
Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.137430 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lsfr8"]
Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.139143 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lsfr8"
Need to start a new one" pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.141798 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.153418 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lsfr8"] Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.285112 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3997d1-0e52-4674-b604-10b057141b3a-operator-scripts\") pod \"root-account-create-update-lsfr8\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.285234 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnqbt\" (UniqueName: \"kubernetes.io/projected/0d3997d1-0e52-4674-b604-10b057141b3a-kube-api-access-gnqbt\") pod \"root-account-create-update-lsfr8\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.387973 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3997d1-0e52-4674-b604-10b057141b3a-operator-scripts\") pod \"root-account-create-update-lsfr8\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.389665 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnqbt\" (UniqueName: \"kubernetes.io/projected/0d3997d1-0e52-4674-b604-10b057141b3a-kube-api-access-gnqbt\") pod \"root-account-create-update-lsfr8\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.389432 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3997d1-0e52-4674-b604-10b057141b3a-operator-scripts\") pod \"root-account-create-update-lsfr8\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.411674 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnqbt\" (UniqueName: \"kubernetes.io/projected/0d3997d1-0e52-4674-b604-10b057141b3a-kube-api-access-gnqbt\") pod \"root-account-create-update-lsfr8\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:15 crc kubenswrapper[4771]: I0129 09:24:15.464493 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:17 crc kubenswrapper[4771]: I0129 09:24:17.735805 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0" Jan 29 09:24:17 crc kubenswrapper[4771]: E0129 09:24:17.736049 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:24:17 crc kubenswrapper[4771]: E0129 09:24:17.736332 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:24:17 crc kubenswrapper[4771]: E0129 09:24:17.736412 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift podName:e6ce7b26-bcc5-4306-ab2c-5691cceeb18f nodeName:}" failed. No retries permitted until 2026-01-29 09:24:25.736390631 +0000 UTC m=+1085.859230858 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift") pod "swift-storage-0" (UID: "e6ce7b26-bcc5-4306-ab2c-5691cceeb18f") : configmap "swift-ring-files" not found Jan 29 09:24:18 crc kubenswrapper[4771]: I0129 09:24:18.962889 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" Jan 29 09:24:19 crc kubenswrapper[4771]: I0129 09:24:19.032555 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gj54q"] Jan 29 09:24:19 crc kubenswrapper[4771]: I0129 09:24:19.032909 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-gj54q" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerName="dnsmasq-dns" containerID="cri-o://dbdfeb7bb5216528b89fbd99bd7109b88474c7686c70a92fc061c5e68f142061" gracePeriod=10 Jan 29 09:24:21 crc kubenswrapper[4771]: I0129 09:24:21.025406 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-gj54q" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Jan 29 09:24:21 crc kubenswrapper[4771]: I0129 09:24:21.043226 4771 generic.go:334] "Generic (PLEG): container finished" podID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerID="dbdfeb7bb5216528b89fbd99bd7109b88474c7686c70a92fc061c5e68f142061" exitCode=0 Jan 29 09:24:21 crc kubenswrapper[4771]: I0129 09:24:21.043273 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gj54q" event={"ID":"63eb7921-71e0-4fa1-ba56-085f331e8a41","Type":"ContainerDied","Data":"dbdfeb7bb5216528b89fbd99bd7109b88474c7686c70a92fc061c5e68f142061"} Jan 29 09:24:21 crc kubenswrapper[4771]: I0129 09:24:21.185030 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.482540 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.483980 4771 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgt45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-wl28d_openstack(2a9c1def-826f-4029-94c3-5670ce333c66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.485396 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-wl28d" podUID="2a9c1def-826f-4029-94c3-5670ce333c66" Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.535241 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:16613028b9148ab9c2e35eb32649815f6037bf305586bb52d695c5f6e798e119: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-swift-proxy-server/blobs/sha256:16613028b9148ab9c2e35eb32649815f6037bf305586bb52d695c5f6e798e119\": context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.535402 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,Command:[/usr/local/bin/swift-ring-tool 
all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:d6e08831-9d9a-447e-8c77-916f55819b78,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzbcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-snzfb_openstack(543a7e6c-ab47-4720-b5f0-6b0800904d36): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:16613028b9148ab9c2e35eb32649815f6037bf305586bb52d695c5f6e798e119: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-swift-proxy-server/blobs/sha256:16613028b9148ab9c2e35eb32649815f6037bf305586bb52d695c5f6e798e119\": context canceled" logger="UnhandledError" Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.536804 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:16613028b9148ab9c2e35eb32649815f6037bf305586bb52d695c5f6e798e119: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-swift-proxy-server/blobs/sha256:16613028b9148ab9c2e35eb32649815f6037bf305586bb52d695c5f6e798e119\\\": context canceled\"" pod="openstack/swift-ring-rebalance-snzfb" podUID="543a7e6c-ab47-4720-b5f0-6b0800904d36" Jan 29 
09:24:25 crc kubenswrapper[4771]: I0129 09:24:25.833062 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0" Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.833280 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.833308 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:24:25 crc kubenswrapper[4771]: E0129 09:24:25.833359 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift podName:e6ce7b26-bcc5-4306-ab2c-5691cceeb18f nodeName:}" failed. No retries permitted until 2026-01-29 09:24:41.833342672 +0000 UTC m=+1101.956182899 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift") pod "swift-storage-0" (UID: "e6ce7b26-bcc5-4306-ab2c-5691cceeb18f") : configmap "swift-ring-files" not found Jan 29 09:24:25 crc kubenswrapper[4771]: I0129 09:24:25.971444 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gj54q" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.038866 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-nb\") pod \"63eb7921-71e0-4fa1-ba56-085f331e8a41\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.039035 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxtwg\" (UniqueName: \"kubernetes.io/projected/63eb7921-71e0-4fa1-ba56-085f331e8a41-kube-api-access-kxtwg\") pod \"63eb7921-71e0-4fa1-ba56-085f331e8a41\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.039175 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-sb\") pod \"63eb7921-71e0-4fa1-ba56-085f331e8a41\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.039212 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-config\") pod \"63eb7921-71e0-4fa1-ba56-085f331e8a41\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.039341 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-dns-svc\") pod \"63eb7921-71e0-4fa1-ba56-085f331e8a41\" (UID: \"63eb7921-71e0-4fa1-ba56-085f331e8a41\") " Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.045345 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63eb7921-71e0-4fa1-ba56-085f331e8a41-kube-api-access-kxtwg" (OuterVolumeSpecName: 
"kube-api-access-kxtwg") pod "63eb7921-71e0-4fa1-ba56-085f331e8a41" (UID: "63eb7921-71e0-4fa1-ba56-085f331e8a41"). InnerVolumeSpecName "kube-api-access-kxtwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.082024 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63eb7921-71e0-4fa1-ba56-085f331e8a41" (UID: "63eb7921-71e0-4fa1-ba56-085f331e8a41"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.089623 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-config" (OuterVolumeSpecName: "config") pod "63eb7921-71e0-4fa1-ba56-085f331e8a41" (UID: "63eb7921-71e0-4fa1-ba56-085f331e8a41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.091005 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63eb7921-71e0-4fa1-ba56-085f331e8a41" (UID: "63eb7921-71e0-4fa1-ba56-085f331e8a41"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.095771 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63eb7921-71e0-4fa1-ba56-085f331e8a41" (UID: "63eb7921-71e0-4fa1-ba56-085f331e8a41"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.097827 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gj54q" event={"ID":"63eb7921-71e0-4fa1-ba56-085f331e8a41","Type":"ContainerDied","Data":"f85e1c3373b4102e078cb4104c12d7d1a4e42c6d14af3d28db6854520f9de37a"} Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.097882 4771 scope.go:117] "RemoveContainer" containerID="dbdfeb7bb5216528b89fbd99bd7109b88474c7686c70a92fc061c5e68f142061" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.097977 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gj54q" Jan 29 09:24:26 crc kubenswrapper[4771]: E0129 09:24:26.099384 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified\\\"\"" pod="openstack/swift-ring-rebalance-snzfb" podUID="543a7e6c-ab47-4720-b5f0-6b0800904d36" Jan 29 09:24:26 crc kubenswrapper[4771]: E0129 09:24:26.099789 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-wl28d" podUID="2a9c1def-826f-4029-94c3-5670ce333c66" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.130922 4771 scope.go:117] "RemoveContainer" containerID="96049b868258091d10ae1183a42d93e1cf261ac012fa2fbb2b7576c9f8744648" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.136355 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lsfr8"] Jan 29 09:24:26 crc kubenswrapper[4771]: W0129 09:24:26.139228 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d3997d1_0e52_4674_b604_10b057141b3a.slice/crio-e2a5c4ecb261be69f6f52bcab6ee8659672b1ada3287a01aad7a78b1cecc4c8a WatchSource:0}: Error finding container e2a5c4ecb261be69f6f52bcab6ee8659672b1ada3287a01aad7a78b1cecc4c8a: Status 404 returned error can't find the container with id e2a5c4ecb261be69f6f52bcab6ee8659672b1ada3287a01aad7a78b1cecc4c8a Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.142054 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.142178 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxtwg\" (UniqueName: \"kubernetes.io/projected/63eb7921-71e0-4fa1-ba56-085f331e8a41-kube-api-access-kxtwg\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.142305 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.142383 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.142460 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63eb7921-71e0-4fa1-ba56-085f331e8a41-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.146013 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.163793 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gj54q"] Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.168858 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-8554648995-gj54q"] Jan 29 09:24:26 crc kubenswrapper[4771]: E0129 09:24:26.620050 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d3997d1_0e52_4674_b604_10b057141b3a.slice/crio-conmon-825913871c9299af1d45814e16a534c6023c7c4c599038cc75a8f0035b4a8829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d3997d1_0e52_4674_b604_10b057141b3a.slice/crio-825913871c9299af1d45814e16a534c6023c7c4c599038cc75a8f0035b4a8829.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:24:26 crc kubenswrapper[4771]: I0129 09:24:26.849508 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" path="/var/lib/kubelet/pods/63eb7921-71e0-4fa1-ba56-085f331e8a41/volumes" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.115470 4771 generic.go:334] "Generic (PLEG): container finished" podID="0d3997d1-0e52-4674-b604-10b057141b3a" containerID="825913871c9299af1d45814e16a534c6023c7c4c599038cc75a8f0035b4a8829" exitCode=0 Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.115511 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lsfr8" event={"ID":"0d3997d1-0e52-4674-b604-10b057141b3a","Type":"ContainerDied","Data":"825913871c9299af1d45814e16a534c6023c7c4c599038cc75a8f0035b4a8829"} Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.115533 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lsfr8" event={"ID":"0d3997d1-0e52-4674-b604-10b057141b3a","Type":"ContainerStarted","Data":"e2a5c4ecb261be69f6f52bcab6ee8659672b1ada3287a01aad7a78b1cecc4c8a"} Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.183818 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hqvn7" podUID="e1390576-f674-420d-93a7-2bee6d52f9f0" containerName="ovn-controller" probeResult="failure" output=< Jan 29 09:24:27 crc kubenswrapper[4771]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 09:24:27 crc kubenswrapper[4771]: > Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.205346 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.223263 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sjg8v" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.441067 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hqvn7-config-2nxff"] Jan 29 09:24:27 crc kubenswrapper[4771]: E0129 09:24:27.441465 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerName="init" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.441488 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerName="init" Jan 29 09:24:27 crc kubenswrapper[4771]: E0129 09:24:27.441530 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerName="dnsmasq-dns" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.441543 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" 
containerName="dnsmasq-dns" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.441733 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="63eb7921-71e0-4fa1-ba56-085f331e8a41" containerName="dnsmasq-dns" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.442573 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.445142 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.466291 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqvn7-config-2nxff"] Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.566683 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-scripts\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.566836 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.566870 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run-ovn\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.566953 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-additional-scripts\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.566983 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-log-ovn\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.567056 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrz8\" (UniqueName: \"kubernetes.io/projected/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-kube-api-access-njrz8\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.668688 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrz8\" (UniqueName: \"kubernetes.io/projected/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-kube-api-access-njrz8\") pod 
\"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.668800 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-scripts\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.668852 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.668876 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run-ovn\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.668959 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-additional-scripts\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.668987 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-log-ovn\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.669203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run-ovn\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.669204 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.669210 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-log-ovn\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.669682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-additional-scripts\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: 
\"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.670987 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-scripts\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.688508 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrz8\" (UniqueName: \"kubernetes.io/projected/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-kube-api-access-njrz8\") pod \"ovn-controller-hqvn7-config-2nxff\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:27 crc kubenswrapper[4771]: I0129 09:24:27.761145 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.126022 4771 generic.go:334] "Generic (PLEG): container finished" podID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerID="2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4" exitCode=0 Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.126125 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9abaa29e-0912-445b-a09f-5ce90865a13b","Type":"ContainerDied","Data":"2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4"} Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.127787 4771 generic.go:334] "Generic (PLEG): container finished" podID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerID="a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02" exitCode=0 Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.127984 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6","Type":"ContainerDied","Data":"a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02"} Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.275950 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqvn7-config-2nxff"] Jan 29 09:24:28 crc kubenswrapper[4771]: W0129 09:24:28.302736 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7df96c22_4d9e_4f0c_95ec_9ffc3e226d7c.slice/crio-12592de8f81f62525679daf1c60827d6a4464cefa3c2c754b1ca1871302fe917 WatchSource:0}: Error finding container 12592de8f81f62525679daf1c60827d6a4464cefa3c2c754b1ca1871302fe917: Status 404 returned error can't find the container with id 12592de8f81f62525679daf1c60827d6a4464cefa3c2c754b1ca1871302fe917 Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.465871 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.489322 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3997d1-0e52-4674-b604-10b057141b3a-operator-scripts\") pod \"0d3997d1-0e52-4674-b604-10b057141b3a\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.489424 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnqbt\" (UniqueName: \"kubernetes.io/projected/0d3997d1-0e52-4674-b604-10b057141b3a-kube-api-access-gnqbt\") pod \"0d3997d1-0e52-4674-b604-10b057141b3a\" (UID: \"0d3997d1-0e52-4674-b604-10b057141b3a\") " Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.490293 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d3997d1-0e52-4674-b604-10b057141b3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d3997d1-0e52-4674-b604-10b057141b3a" (UID: "0d3997d1-0e52-4674-b604-10b057141b3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.495894 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3997d1-0e52-4674-b604-10b057141b3a-kube-api-access-gnqbt" (OuterVolumeSpecName: "kube-api-access-gnqbt") pod "0d3997d1-0e52-4674-b604-10b057141b3a" (UID: "0d3997d1-0e52-4674-b604-10b057141b3a"). InnerVolumeSpecName "kube-api-access-gnqbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.591645 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3997d1-0e52-4674-b604-10b057141b3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:28 crc kubenswrapper[4771]: I0129 09:24:28.591680 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnqbt\" (UniqueName: \"kubernetes.io/projected/0d3997d1-0e52-4674-b604-10b057141b3a-kube-api-access-gnqbt\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.138471 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lsfr8" event={"ID":"0d3997d1-0e52-4674-b604-10b057141b3a","Type":"ContainerDied","Data":"e2a5c4ecb261be69f6f52bcab6ee8659672b1ada3287a01aad7a78b1cecc4c8a"} Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.139257 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a5c4ecb261be69f6f52bcab6ee8659672b1ada3287a01aad7a78b1cecc4c8a" Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.139185 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lsfr8" Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.141734 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9abaa29e-0912-445b-a09f-5ce90865a13b","Type":"ContainerStarted","Data":"c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5"} Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.142851 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.144944 4771 generic.go:334] "Generic (PLEG): container finished" podID="7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" containerID="e7301068cd18e70d6e5005a2e25bd0def388e88c9133dccbe13f5813c35e289e" exitCode=0 Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.145002 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqvn7-config-2nxff" event={"ID":"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c","Type":"ContainerDied","Data":"e7301068cd18e70d6e5005a2e25bd0def388e88c9133dccbe13f5813c35e289e"} Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.145023 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqvn7-config-2nxff" event={"ID":"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c","Type":"ContainerStarted","Data":"12592de8f81f62525679daf1c60827d6a4464cefa3c2c754b1ca1871302fe917"} Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.146927 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6","Type":"ContainerStarted","Data":"ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf"} Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.147566 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.181038 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.20459369 podStartE2EDuration="1m7.18101826s" podCreationTimestamp="2026-01-29 09:23:22 +0000 UTC" firstStartedPulling="2026-01-29 09:23:24.384425944 +0000 UTC m=+1024.507266161" lastFinishedPulling="2026-01-29 09:23:54.360850494 +0000 UTC m=+1054.483690731" observedRunningTime="2026-01-29 09:24:29.178785769 +0000 UTC m=+1089.301626006" watchObservedRunningTime="2026-01-29 09:24:29.18101826 +0000 UTC m=+1089.303858487" Jan 29 09:24:29 crc kubenswrapper[4771]: I0129 09:24:29.211820 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.435685376 podStartE2EDuration="1m7.211798588s" podCreationTimestamp="2026-01-29 09:23:22 +0000 UTC" firstStartedPulling="2026-01-29 09:23:24.48153792 +0000 UTC m=+1024.604378147" lastFinishedPulling="2026-01-29 09:23:54.257651132 +0000 UTC m=+1054.380491359" observedRunningTime="2026-01-29 09:24:29.208350404 +0000 UTC m=+1089.331190631" watchObservedRunningTime="2026-01-29 09:24:29.211798588 +0000 UTC m=+1089.334638815" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.531800 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.631924 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-log-ovn\") pod \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.631975 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-scripts\") pod \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632076 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run-ovn\") pod \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632115 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-additional-scripts\") pod \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632067 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" (UID: "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632145 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njrz8\" (UniqueName: \"kubernetes.io/projected/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-kube-api-access-njrz8\") pod \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632202 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" (UID: "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632232 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run\") pod \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\" (UID: \"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c\") " Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632594 4771 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632604 4771 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.632662 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run" (OuterVolumeSpecName: "var-run") pod "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" (UID: "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.633708 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" (UID: "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.634470 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-scripts" (OuterVolumeSpecName: "scripts") pod "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" (UID: "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.641231 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-kube-api-access-njrz8" (OuterVolumeSpecName: "kube-api-access-njrz8") pod "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" (UID: "7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c"). InnerVolumeSpecName "kube-api-access-njrz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.734835 4771 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.734895 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njrz8\" (UniqueName: \"kubernetes.io/projected/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-kube-api-access-njrz8\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.734938 4771 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:30 crc kubenswrapper[4771]: I0129 09:24:30.734949 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:31 crc kubenswrapper[4771]: I0129 09:24:31.164915 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqvn7-config-2nxff" event={"ID":"7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c","Type":"ContainerDied","Data":"12592de8f81f62525679daf1c60827d6a4464cefa3c2c754b1ca1871302fe917"} Jan 29 09:24:31 crc kubenswrapper[4771]: I0129 09:24:31.165245 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12592de8f81f62525679daf1c60827d6a4464cefa3c2c754b1ca1871302fe917" Jan 29 09:24:31 crc kubenswrapper[4771]: I0129 09:24:31.164997 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqvn7-config-2nxff" Jan 29 09:24:31 crc kubenswrapper[4771]: I0129 09:24:31.653119 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hqvn7-config-2nxff"] Jan 29 09:24:31 crc kubenswrapper[4771]: I0129 09:24:31.661095 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hqvn7-config-2nxff"] Jan 29 09:24:32 crc kubenswrapper[4771]: I0129 09:24:32.183876 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hqvn7" Jan 29 09:24:32 crc kubenswrapper[4771]: I0129 09:24:32.849525 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" path="/var/lib/kubelet/pods/7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c/volumes" Jan 29 09:24:41 crc kubenswrapper[4771]: I0129 09:24:41.838552 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0" Jan 29 09:24:41 crc kubenswrapper[4771]: E0129 09:24:41.838908 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:24:41 crc kubenswrapper[4771]: E0129 09:24:41.839197 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:24:41 crc kubenswrapper[4771]: E0129 09:24:41.839307 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift 
podName:e6ce7b26-bcc5-4306-ab2c-5691cceeb18f nodeName:}" failed. No retries permitted until 2026-01-29 09:25:13.839283084 +0000 UTC m=+1133.962123311 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift") pod "swift-storage-0" (UID: "e6ce7b26-bcc5-4306-ab2c-5691cceeb18f") : configmap "swift-ring-files" not found Jan 29 09:24:43 crc kubenswrapper[4771]: I0129 09:24:43.841891 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:24:43 crc kubenswrapper[4771]: I0129 09:24:43.923899 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 09:24:44 crc kubenswrapper[4771]: I0129 09:24:44.287312 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wl28d" event={"ID":"2a9c1def-826f-4029-94c3-5670ce333c66","Type":"ContainerStarted","Data":"dc0b5dc6591c35e10a162bd6b094351f8c915395a6e4867c6d65917ddc0aa69d"} Jan 29 09:24:44 crc kubenswrapper[4771]: I0129 09:24:44.310968 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wl28d" podStartSLOduration=2.424188122 podStartE2EDuration="37.310947114s" podCreationTimestamp="2026-01-29 09:24:07 +0000 UTC" firstStartedPulling="2026-01-29 09:24:08.328653577 +0000 UTC m=+1068.451493804" lastFinishedPulling="2026-01-29 09:24:43.215412569 +0000 UTC m=+1103.338252796" observedRunningTime="2026-01-29 09:24:44.306197464 +0000 UTC m=+1104.429037691" watchObservedRunningTime="2026-01-29 09:24:44.310947114 +0000 UTC m=+1104.433787341" Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.827809 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ftr47"] Jan 29 09:24:45 crc kubenswrapper[4771]: E0129 09:24:45.828172 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3997d1-0e52-4674-b604-10b057141b3a" containerName="mariadb-account-create-update" Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.828185 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3997d1-0e52-4674-b604-10b057141b3a" containerName="mariadb-account-create-update" Jan 29 09:24:45 crc kubenswrapper[4771]: E0129 09:24:45.828196 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" containerName="ovn-config" Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.828202 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" containerName="ovn-config" Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.828388 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3997d1-0e52-4674-b604-10b057141b3a" containerName="mariadb-account-create-update" Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.828409 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df96c22-4d9e-4f0c-95ec-9ffc3e226d7c" containerName="ovn-config" Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.828968 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.879654 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ftr47"]
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.924235 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xswj\" (UniqueName: \"kubernetes.io/projected/84de2c50-e584-4134-82bd-5868077005af-kube-api-access-9xswj\") pod \"cinder-db-create-ftr47\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.924399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84de2c50-e584-4134-82bd-5868077005af-operator-scripts\") pod \"cinder-db-create-ftr47\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.951416 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-krx65"]
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.953519 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.965903 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3781-account-create-update-64ds5"]
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.967248 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.971142 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 29 09:24:45 crc kubenswrapper[4771]: I0129 09:24:45.991117 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-krx65"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.000409 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3781-account-create-update-64ds5"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.027071 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84de2c50-e584-4134-82bd-5868077005af-operator-scripts\") pod \"cinder-db-create-ftr47\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.027171 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xswj\" (UniqueName: \"kubernetes.io/projected/84de2c50-e584-4134-82bd-5868077005af-kube-api-access-9xswj\") pod \"cinder-db-create-ftr47\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.028471 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84de2c50-e584-4134-82bd-5868077005af-operator-scripts\") pod \"cinder-db-create-ftr47\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.046358 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-63fb-account-create-update-xskfz"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.047898 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.062732 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-63fb-account-create-update-xskfz"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.066134 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.073336 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xswj\" (UniqueName: \"kubernetes.io/projected/84de2c50-e584-4134-82bd-5868077005af-kube-api-access-9xswj\") pod \"cinder-db-create-ftr47\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.130988 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-operator-scripts\") pod \"barbican-db-create-krx65\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") " pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.131080 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrw29\" (UniqueName: \"kubernetes.io/projected/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-kube-api-access-qrw29\") pod \"barbican-3781-account-create-update-64ds5\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.131228 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqsc5\" (UniqueName: \"kubernetes.io/projected/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-kube-api-access-qqsc5\") pod \"barbican-db-create-krx65\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") " pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.131482 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-operator-scripts\") pod \"barbican-3781-account-create-update-64ds5\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.148807 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.180544 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vrfbh"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.182712 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.188076 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-497zd"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.188588 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.188915 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.188914 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.194773 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vrfbh"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.206308 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-flpb9"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.208009 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.214444 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-flpb9"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.235397 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-operator-scripts\") pod \"barbican-3781-account-create-update-64ds5\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.235506 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a486225-e2df-458e-8db7-5d5bc40e7fe4-operator-scripts\") pod \"cinder-63fb-account-create-update-xskfz\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.235582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2cg\" (UniqueName: \"kubernetes.io/projected/5a486225-e2df-458e-8db7-5d5bc40e7fe4-kube-api-access-qk2cg\") pod \"cinder-63fb-account-create-update-xskfz\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.235620 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-operator-scripts\") pod \"barbican-db-create-krx65\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") " pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.235656 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrw29\" (UniqueName: \"kubernetes.io/projected/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-kube-api-access-qrw29\") pod \"barbican-3781-account-create-update-64ds5\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.235773 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqsc5\" (UniqueName: \"kubernetes.io/projected/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-kube-api-access-qqsc5\") pod \"barbican-db-create-krx65\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") " pod="openstack/barbican-db-create-krx65"
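The "SyncLoop ADD" / "SyncLoop UPDATE" entries with source="api" are kubelet reacting to pod objects streamed from the API server. The same watch pattern can be reproduced from outside kubelet with a standard client-go informer; this is a hedged sketch analogous to (not identical with) kubelet's internal pod source, with an assumed kubeconfig path:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only the "openstack" namespace, matching the pods in this log.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second, informers.WithNamespace("openstack"))

	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("ADD %s/%s\n", pod.Namespace, pod.Name)
		},
		UpdateFunc: func(_, obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("UPDATE %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	select {} // run until killed
}
```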
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.236747 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-operator-scripts\") pod \"barbican-3781-account-create-update-64ds5\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.237246 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-operator-scripts\") pod \"barbican-db-create-krx65\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") " pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.261311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrw29\" (UniqueName: \"kubernetes.io/projected/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-kube-api-access-qrw29\") pod \"barbican-3781-account-create-update-64ds5\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.266275 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqsc5\" (UniqueName: \"kubernetes.io/projected/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-kube-api-access-qqsc5\") pod \"barbican-db-create-krx65\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") " pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.277303 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.301107 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3781-account-create-update-64ds5"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.330578 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f3e6-account-create-update-p6nfc"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.332214 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.335712 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.337272 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmzm\" (UniqueName: \"kubernetes.io/projected/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-kube-api-access-zcmzm\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.337465 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a486225-e2df-458e-8db7-5d5bc40e7fe4-operator-scripts\") pod \"cinder-63fb-account-create-update-xskfz\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.337603 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-config-data\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.337737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk2cg\" (UniqueName: \"kubernetes.io/projected/5a486225-e2df-458e-8db7-5d5bc40e7fe4-kube-api-access-qk2cg\") pod \"cinder-63fb-account-create-update-xskfz\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.337873 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-combined-ca-bundle\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.337983 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef8563f-f9a9-4ca6-a211-a7b1745337bf-operator-scripts\") pod \"neutron-db-create-flpb9\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") " pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.338158 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknb4\" (UniqueName: \"kubernetes.io/projected/aef8563f-f9a9-4ca6-a211-a7b1745337bf-kube-api-access-wknb4\") pod \"neutron-db-create-flpb9\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") " pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.338727 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a486225-e2df-458e-8db7-5d5bc40e7fe4-operator-scripts\") pod \"cinder-63fb-account-create-update-xskfz\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.341773 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f3e6-account-create-update-p6nfc"]
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.364096 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk2cg\" (UniqueName: \"kubernetes.io/projected/5a486225-e2df-458e-8db7-5d5bc40e7fe4-kube-api-access-qk2cg\") pod \"cinder-63fb-account-create-update-xskfz\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.411749 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.439852 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbfv\" (UniqueName: \"kubernetes.io/projected/abdcffc7-132e-4569-9a16-cae3202fcab8-kube-api-access-pcbfv\") pod \"neutron-f3e6-account-create-update-p6nfc\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.440280 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdcffc7-132e-4569-9a16-cae3202fcab8-operator-scripts\") pod \"neutron-f3e6-account-create-update-p6nfc\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.440425 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmzm\" (UniqueName: \"kubernetes.io/projected/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-kube-api-access-zcmzm\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.440569 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-config-data\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.440762 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-combined-ca-bundle\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.441255 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef8563f-f9a9-4ca6-a211-a7b1745337bf-operator-scripts\") pod \"neutron-db-create-flpb9\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") " pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.442041 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef8563f-f9a9-4ca6-a211-a7b1745337bf-operator-scripts\") pod \"neutron-db-create-flpb9\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") " pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.444484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-config-data\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.446016 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknb4\" (UniqueName: \"kubernetes.io/projected/aef8563f-f9a9-4ca6-a211-a7b1745337bf-kube-api-access-wknb4\") pod \"neutron-db-create-flpb9\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") " pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.447214 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-combined-ca-bundle\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.459850 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmzm\" (UniqueName: \"kubernetes.io/projected/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-kube-api-access-zcmzm\") pod \"keystone-db-sync-vrfbh\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.462463 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknb4\" (UniqueName: \"kubernetes.io/projected/aef8563f-f9a9-4ca6-a211-a7b1745337bf-kube-api-access-wknb4\") pod \"neutron-db-create-flpb9\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") " pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.515831 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vrfbh"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.534952 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.547406 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbfv\" (UniqueName: \"kubernetes.io/projected/abdcffc7-132e-4569-9a16-cae3202fcab8-kube-api-access-pcbfv\") pod \"neutron-f3e6-account-create-update-p6nfc\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.547476 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdcffc7-132e-4569-9a16-cae3202fcab8-operator-scripts\") pod \"neutron-f3e6-account-create-update-p6nfc\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.548226 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdcffc7-132e-4569-9a16-cae3202fcab8-operator-scripts\") pod \"neutron-f3e6-account-create-update-p6nfc\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.566404 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcbfv\" (UniqueName: \"kubernetes.io/projected/abdcffc7-132e-4569-9a16-cae3202fcab8-kube-api-access-pcbfv\") pod \"neutron-f3e6-account-create-update-p6nfc\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:46 crc kubenswrapper[4771]: I0129 09:24:46.663690 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f3e6-account-create-update-p6nfc"
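Each of these one-shot job pods gets the same two volumes the reconciler keeps mounting: a ConfigMap-backed "operator-scripts" volume and an API-server-injected "kube-api-access-*" projected service-account token volume (which is why the UniqueName prefixes are kubernetes.io/configmap and kubernetes.io/projected). A hedged sketch of that shape using the k8s.io/api types; the volume names come from the neutron-db-create-flpb9 entries above, while the source ConfigMap name and the rest of the skeleton are illustrative assumptions:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func jobPodVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			// ConfigMap volume, logged as kubernetes.io/configmap/...-operator-scripts.
			Name: "operator-scripts",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "openstack-scripts", // assumed source ConfigMap name
					},
				},
			},
		},
		{
			// Projected service-account token volume, logged as
			// kubernetes.io/projected/...-kube-api-access-wknb4. The API server
			// normally injects this; it is spelled out here only to show why the
			// log calls it a projected volume.
			Name: "kube-api-access-wknb4",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
						{ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
							Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
						}},
					},
				},
			},
		},
	}
}

func main() { _ = jobPodVolumes() }
```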
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.210011 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-krx65"]
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.311368 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ftr47"]
Jan 29 09:24:48 crc kubenswrapper[4771]: W0129 09:24:48.313525 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84de2c50_e584_4134_82bd_5868077005af.slice/crio-d3ba15992e6efcfd8b051435475f492303ca4260c6d1af086437e129f4fdc8ab WatchSource:0}: Error finding container d3ba15992e6efcfd8b051435475f492303ca4260c6d1af086437e129f4fdc8ab: Status 404 returned error can't find the container with id d3ba15992e6efcfd8b051435475f492303ca4260c6d1af086437e129f4fdc8ab
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.328723 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-krx65" event={"ID":"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd","Type":"ContainerStarted","Data":"1df051e6d433a0f09fa67f1fc1dce5ea69f99f4a3d1b89f50060f6be0208ac3e"}
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.329977 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ftr47" event={"ID":"84de2c50-e584-4134-82bd-5868077005af","Type":"ContainerStarted","Data":"d3ba15992e6efcfd8b051435475f492303ca4260c6d1af086437e129f4fdc8ab"}
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.333457 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snzfb" event={"ID":"543a7e6c-ab47-4720-b5f0-6b0800904d36","Type":"ContainerStarted","Data":"c11180a095aaecfcf420210cbb6ff664b97b7dcc99e5e4a7e8fe3c190ac96f70"}
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.359052 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-snzfb" podStartSLOduration=2.160212592 podStartE2EDuration="35.359033436s" podCreationTimestamp="2026-01-29 09:24:13 +0000 UTC" firstStartedPulling="2026-01-29 09:24:14.533297702 +0000 UTC m=+1074.656137929" lastFinishedPulling="2026-01-29 09:24:47.732118546 +0000 UTC m=+1107.854958773" observedRunningTime="2026-01-29 09:24:48.355267883 +0000 UTC m=+1108.478108130" watchObservedRunningTime="2026-01-29 09:24:48.359033436 +0000 UTC m=+1108.481873663"
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.447789 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3781-account-create-update-64ds5"]
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.457723 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vrfbh"]
Jan 29 09:24:48 crc kubenswrapper[4771]: W0129 09:24:48.618285 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaef8563f_f9a9_4ca6_a211_a7b1745337bf.slice/crio-e1dd0ac27055d2692d99be929e2822eb6d6db35d1d7ecc7aee6ecc8d40e1adb2 WatchSource:0}: Error finding container e1dd0ac27055d2692d99be929e2822eb6d6db35d1d7ecc7aee6ecc8d40e1adb2: Status 404 returned error can't find the container with id e1dd0ac27055d2692d99be929e2822eb6d6db35d1d7ecc7aee6ecc8d40e1adb2
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.619289 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-flpb9"]
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.640942 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-63fb-account-create-update-xskfz"]
Jan 29 09:24:48 crc kubenswrapper[4771]: W0129 09:24:48.649033 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a486225_e2df_458e_8db7_5d5bc40e7fe4.slice/crio-143c5105550ab0832adf8c728c2f418ced2f618f85f97f6449c138840038ff49 WatchSource:0}: Error finding container 143c5105550ab0832adf8c728c2f418ced2f618f85f97f6449c138840038ff49: Status 404 returned error can't find the container with id 143c5105550ab0832adf8c728c2f418ced2f618f85f97f6449c138840038ff49
Jan 29 09:24:48 crc kubenswrapper[4771]: I0129 09:24:48.649169 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f3e6-account-create-update-p6nfc"]
Jan 29 09:24:48 crc kubenswrapper[4771]: W0129 09:24:48.659612 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabdcffc7_132e_4569_9a16_cae3202fcab8.slice/crio-cf75df8820dd34d2d0fdf68a5e526cbbf1423ad27b6ce8eabd788df8d5979303 WatchSource:0}: Error finding container cf75df8820dd34d2d0fdf68a5e526cbbf1423ad27b6ce8eabd788df8d5979303: Status 404 returned error can't find the container with id cf75df8820dd34d2d0fdf68a5e526cbbf1423ad27b6ce8eabd788df8d5979303
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.347864 4771 generic.go:334] "Generic (PLEG): container finished" podID="cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd" containerID="249ecd70e7c7b374e321a1b6edfc01dbb806f6f56f85ed815998d59ab3c7e6e5" exitCode=0
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.347964 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-krx65" event={"ID":"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd","Type":"ContainerDied","Data":"249ecd70e7c7b374e321a1b6edfc01dbb806f6f56f85ed815998d59ab3c7e6e5"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.351554 4771 generic.go:334] "Generic (PLEG): container finished" podID="84de2c50-e584-4134-82bd-5868077005af" containerID="15e6be68d699d6de5d45565fc9a34c3da2cbb0b4979d3b469bbe1a9aec42a6a6" exitCode=0
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.351621 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ftr47" event={"ID":"84de2c50-e584-4134-82bd-5868077005af","Type":"ContainerDied","Data":"15e6be68d699d6de5d45565fc9a34c3da2cbb0b4979d3b469bbe1a9aec42a6a6"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.353149 4771 generic.go:334] "Generic (PLEG): container finished" podID="ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72" containerID="960c016c3255df5162075f77e1567fb0572043547391ed97fa6033c12dda713e" exitCode=0
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.353188 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3781-account-create-update-64ds5" event={"ID":"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72","Type":"ContainerDied","Data":"960c016c3255df5162075f77e1567fb0572043547391ed97fa6033c12dda713e"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.353203 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3781-account-create-update-64ds5" event={"ID":"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72","Type":"ContainerStarted","Data":"d6861db3c51654bffa03f9325eb34e5c3b36186c45684c7cd342227f25e82ea2"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.355208 4771 generic.go:334] "Generic (PLEG): container finished" podID="aef8563f-f9a9-4ca6-a211-a7b1745337bf" containerID="0be693f3165b2f1f17525c09c9d6297b15e5a3ffb976f1933f2a65406b7b1d1d" exitCode=0
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.355335 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-flpb9" event={"ID":"aef8563f-f9a9-4ca6-a211-a7b1745337bf","Type":"ContainerDied","Data":"0be693f3165b2f1f17525c09c9d6297b15e5a3ffb976f1933f2a65406b7b1d1d"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.355437 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-flpb9" event={"ID":"aef8563f-f9a9-4ca6-a211-a7b1745337bf","Type":"ContainerStarted","Data":"e1dd0ac27055d2692d99be929e2822eb6d6db35d1d7ecc7aee6ecc8d40e1adb2"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.357560 4771 generic.go:334] "Generic (PLEG): container finished" podID="abdcffc7-132e-4569-9a16-cae3202fcab8" containerID="06cb6c592df78188c827b6ed63abfbb10933762c80e2ffb812249b011e10533c" exitCode=0
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.357614 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f3e6-account-create-update-p6nfc" event={"ID":"abdcffc7-132e-4569-9a16-cae3202fcab8","Type":"ContainerDied","Data":"06cb6c592df78188c827b6ed63abfbb10933762c80e2ffb812249b011e10533c"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.357635 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f3e6-account-create-update-p6nfc" event={"ID":"abdcffc7-132e-4569-9a16-cae3202fcab8","Type":"ContainerStarted","Data":"cf75df8820dd34d2d0fdf68a5e526cbbf1423ad27b6ce8eabd788df8d5979303"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.359575 4771 generic.go:334] "Generic (PLEG): container finished" podID="5a486225-e2df-458e-8db7-5d5bc40e7fe4" containerID="60a6d3abd8df862a73d1ff8180e088cb7adb003ad5b653d9379bb5316d4d3ed5" exitCode=0
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.359613 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-63fb-account-create-update-xskfz" event={"ID":"5a486225-e2df-458e-8db7-5d5bc40e7fe4","Type":"ContainerDied","Data":"60a6d3abd8df862a73d1ff8180e088cb7adb003ad5b653d9379bb5316d4d3ed5"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.359629 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-63fb-account-create-update-xskfz" event={"ID":"5a486225-e2df-458e-8db7-5d5bc40e7fe4","Type":"ContainerStarted","Data":"143c5105550ab0832adf8c728c2f418ced2f618f85f97f6449c138840038ff49"}
Jan 29 09:24:49 crc kubenswrapper[4771]: I0129 09:24:49.374407 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vrfbh" event={"ID":"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29","Type":"ContainerStarted","Data":"a1fa9b29be56f7eb1e070640079026ecc978d47ce65a3b58daadd8618cfcf088"}
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.225933 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-flpb9"
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.233634 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-krx65"
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.281576 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-63fb-account-create-update-xskfz"
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.287579 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ftr47"
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.291948 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef8563f-f9a9-4ca6-a211-a7b1745337bf-operator-scripts\") pod \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") "
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.292024 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-operator-scripts\") pod \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") "
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.292142 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknb4\" (UniqueName: \"kubernetes.io/projected/aef8563f-f9a9-4ca6-a211-a7b1745337bf-kube-api-access-wknb4\") pod \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\" (UID: \"aef8563f-f9a9-4ca6-a211-a7b1745337bf\") "
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.292197 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqsc5\" (UniqueName: \"kubernetes.io/projected/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-kube-api-access-qqsc5\") pod \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\" (UID: \"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd\") "
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.294059 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd" (UID: "cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.294679 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef8563f-f9a9-4ca6-a211-a7b1745337bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aef8563f-f9a9-4ca6-a211-a7b1745337bf" (UID: "aef8563f-f9a9-4ca6-a211-a7b1745337bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.295130 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f3e6-account-create-update-p6nfc"
Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.306753 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-kube-api-access-qqsc5" (OuterVolumeSpecName: "kube-api-access-qqsc5") pod "cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd" (UID: "cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd"). InnerVolumeSpecName "kube-api-access-qqsc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.316965 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef8563f-f9a9-4ca6-a211-a7b1745337bf-kube-api-access-wknb4" (OuterVolumeSpecName: "kube-api-access-wknb4") pod "aef8563f-f9a9-4ca6-a211-a7b1745337bf" (UID: "aef8563f-f9a9-4ca6-a211-a7b1745337bf"). InnerVolumeSpecName "kube-api-access-wknb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.349023 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3781-account-create-update-64ds5" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.393894 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xswj\" (UniqueName: \"kubernetes.io/projected/84de2c50-e584-4134-82bd-5868077005af-kube-api-access-9xswj\") pod \"84de2c50-e584-4134-82bd-5868077005af\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.393959 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcbfv\" (UniqueName: \"kubernetes.io/projected/abdcffc7-132e-4569-9a16-cae3202fcab8-kube-api-access-pcbfv\") pod \"abdcffc7-132e-4569-9a16-cae3202fcab8\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394012 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84de2c50-e584-4134-82bd-5868077005af-operator-scripts\") pod \"84de2c50-e584-4134-82bd-5868077005af\" (UID: \"84de2c50-e584-4134-82bd-5868077005af\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394051 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdcffc7-132e-4569-9a16-cae3202fcab8-operator-scripts\") pod \"abdcffc7-132e-4569-9a16-cae3202fcab8\" (UID: \"abdcffc7-132e-4569-9a16-cae3202fcab8\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394103 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrw29\" (UniqueName: \"kubernetes.io/projected/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-kube-api-access-qrw29\") pod \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394145 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk2cg\" (UniqueName: \"kubernetes.io/projected/5a486225-e2df-458e-8db7-5d5bc40e7fe4-kube-api-access-qk2cg\") pod \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394222 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-operator-scripts\") pod \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\" (UID: \"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394253 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a486225-e2df-458e-8db7-5d5bc40e7fe4-operator-scripts\") pod 
\"5a486225-e2df-458e-8db7-5d5bc40e7fe4\" (UID: \"5a486225-e2df-458e-8db7-5d5bc40e7fe4\") " Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394782 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef8563f-f9a9-4ca6-a211-a7b1745337bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394788 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abdcffc7-132e-4569-9a16-cae3202fcab8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abdcffc7-132e-4569-9a16-cae3202fcab8" (UID: "abdcffc7-132e-4569-9a16-cae3202fcab8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394809 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394825 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wknb4\" (UniqueName: \"kubernetes.io/projected/aef8563f-f9a9-4ca6-a211-a7b1745337bf-kube-api-access-wknb4\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.394839 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqsc5\" (UniqueName: \"kubernetes.io/projected/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd-kube-api-access-qqsc5\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.395239 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a486225-e2df-458e-8db7-5d5bc40e7fe4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a486225-e2df-458e-8db7-5d5bc40e7fe4" (UID: "5a486225-e2df-458e-8db7-5d5bc40e7fe4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.395858 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72" (UID: "ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.396762 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84de2c50-e584-4134-82bd-5868077005af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84de2c50-e584-4134-82bd-5868077005af" (UID: "84de2c50-e584-4134-82bd-5868077005af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.397249 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84de2c50-e584-4134-82bd-5868077005af-kube-api-access-9xswj" (OuterVolumeSpecName: "kube-api-access-9xswj") pod "84de2c50-e584-4134-82bd-5868077005af" (UID: "84de2c50-e584-4134-82bd-5868077005af"). InnerVolumeSpecName "kube-api-access-9xswj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.399294 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-kube-api-access-qrw29" (OuterVolumeSpecName: "kube-api-access-qrw29") pod "ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72" (UID: "ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72"). InnerVolumeSpecName "kube-api-access-qrw29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.399523 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a486225-e2df-458e-8db7-5d5bc40e7fe4-kube-api-access-qk2cg" (OuterVolumeSpecName: "kube-api-access-qk2cg") pod "5a486225-e2df-458e-8db7-5d5bc40e7fe4" (UID: "5a486225-e2df-458e-8db7-5d5bc40e7fe4"). InnerVolumeSpecName "kube-api-access-qk2cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.399635 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abdcffc7-132e-4569-9a16-cae3202fcab8-kube-api-access-pcbfv" (OuterVolumeSpecName: "kube-api-access-pcbfv") pod "abdcffc7-132e-4569-9a16-cae3202fcab8" (UID: "abdcffc7-132e-4569-9a16-cae3202fcab8"). InnerVolumeSpecName "kube-api-access-pcbfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.409015 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3781-account-create-update-64ds5" event={"ID":"ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72","Type":"ContainerDied","Data":"d6861db3c51654bffa03f9325eb34e5c3b36186c45684c7cd342227f25e82ea2"} Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.409061 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6861db3c51654bffa03f9325eb34e5c3b36186c45684c7cd342227f25e82ea2" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.409101 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3781-account-create-update-64ds5" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.410544 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-flpb9" event={"ID":"aef8563f-f9a9-4ca6-a211-a7b1745337bf","Type":"ContainerDied","Data":"e1dd0ac27055d2692d99be929e2822eb6d6db35d1d7ecc7aee6ecc8d40e1adb2"} Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.410623 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1dd0ac27055d2692d99be929e2822eb6d6db35d1d7ecc7aee6ecc8d40e1adb2" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.410558 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-flpb9" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.411859 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f3e6-account-create-update-p6nfc" event={"ID":"abdcffc7-132e-4569-9a16-cae3202fcab8","Type":"ContainerDied","Data":"cf75df8820dd34d2d0fdf68a5e526cbbf1423ad27b6ce8eabd788df8d5979303"} Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.411915 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf75df8820dd34d2d0fdf68a5e526cbbf1423ad27b6ce8eabd788df8d5979303" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.411870 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f3e6-account-create-update-p6nfc" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.413443 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-63fb-account-create-update-xskfz" event={"ID":"5a486225-e2df-458e-8db7-5d5bc40e7fe4","Type":"ContainerDied","Data":"143c5105550ab0832adf8c728c2f418ced2f618f85f97f6449c138840038ff49"} Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.413575 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="143c5105550ab0832adf8c728c2f418ced2f618f85f97f6449c138840038ff49" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.413717 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-63fb-account-create-update-xskfz" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.422070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vrfbh" event={"ID":"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29","Type":"ContainerStarted","Data":"12cce7233911ce2afbecc7f41abeb2339f7c9c0f50aa596dea3521b06f33923d"} Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.427035 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-krx65" event={"ID":"cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd","Type":"ContainerDied","Data":"1df051e6d433a0f09fa67f1fc1dce5ea69f99f4a3d1b89f50060f6be0208ac3e"} Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.427075 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df051e6d433a0f09fa67f1fc1dce5ea69f99f4a3d1b89f50060f6be0208ac3e" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.427129 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-krx65" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.429973 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ftr47" event={"ID":"84de2c50-e584-4134-82bd-5868077005af","Type":"ContainerDied","Data":"d3ba15992e6efcfd8b051435475f492303ca4260c6d1af086437e129f4fdc8ab"} Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.430013 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ba15992e6efcfd8b051435475f492303ca4260c6d1af086437e129f4fdc8ab" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.430079 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ftr47" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.496915 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcbfv\" (UniqueName: \"kubernetes.io/projected/abdcffc7-132e-4569-9a16-cae3202fcab8-kube-api-access-pcbfv\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.496951 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84de2c50-e584-4134-82bd-5868077005af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.496960 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abdcffc7-132e-4569-9a16-cae3202fcab8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.496969 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrw29\" (UniqueName: \"kubernetes.io/projected/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-kube-api-access-qrw29\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.496978 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk2cg\" (UniqueName: \"kubernetes.io/projected/5a486225-e2df-458e-8db7-5d5bc40e7fe4-kube-api-access-qk2cg\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.496987 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.496996 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a486225-e2df-458e-8db7-5d5bc40e7fe4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:53 crc kubenswrapper[4771]: I0129 09:24:53.497004 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xswj\" (UniqueName: \"kubernetes.io/projected/84de2c50-e584-4134-82bd-5868077005af-kube-api-access-9xswj\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:54 crc kubenswrapper[4771]: I0129 09:24:54.259519 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vrfbh" podStartSLOduration=3.723284455 podStartE2EDuration="8.259498834s" podCreationTimestamp="2026-01-29 09:24:46 +0000 UTC" firstStartedPulling="2026-01-29 09:24:48.46858806 +0000 UTC m=+1108.591428287" lastFinishedPulling="2026-01-29 09:24:53.004802439 +0000 UTC m=+1113.127642666" observedRunningTime="2026-01-29 09:24:53.443206219 +0000 UTC m=+1113.566046476" watchObservedRunningTime="2026-01-29 09:24:54.259498834 +0000 UTC m=+1114.382339061" Jan 29 09:24:54 crc kubenswrapper[4771]: I0129 09:24:54.440301 4771 generic.go:334] "Generic (PLEG): container finished" podID="2a9c1def-826f-4029-94c3-5670ce333c66" containerID="dc0b5dc6591c35e10a162bd6b094351f8c915395a6e4867c6d65917ddc0aa69d" exitCode=0 Jan 29 09:24:54 crc kubenswrapper[4771]: I0129 09:24:54.440596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wl28d" event={"ID":"2a9c1def-826f-4029-94c3-5670ce333c66","Type":"ContainerDied","Data":"dc0b5dc6591c35e10a162bd6b094351f8c915395a6e4867c6d65917ddc0aa69d"} Jan 29 09:24:55 crc kubenswrapper[4771]: I0129 09:24:55.944743 4771 util.go:48] "No ready 
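The pod_startup_latency_tracker fields above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go sketch reproducing the arithmetic for the keystone-db-sync-vrfbh entry; the timestamps are copied from the log (monotonic m=+ offsets dropped), and the relationship is inferred from the numbers themselves:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Go accepts a fractional-seconds field on parse even when the layout
	// omits it, so one layout covers all of these log timestamps.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-29 09:24:46 +0000 UTC")
	firstPull := mustParse("2026-01-29 09:24:48.46858806 +0000 UTC")
	lastPull := mustParse("2026-01-29 09:24:53.004802439 +0000 UTC")
	observed := mustParse("2026-01-29 09:24:54.259498834 +0000 UTC")

	e2e := observed.Sub(created)    // 8.259498834s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 4.536214379s spent pulling the image
	slo := e2e - pull               // 3.723284455s == podStartSLOduration
	fmt.Println(e2e, pull, slo)
}
```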
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.043835 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgt45\" (UniqueName: \"kubernetes.io/projected/2a9c1def-826f-4029-94c3-5670ce333c66-kube-api-access-xgt45\") pod \"2a9c1def-826f-4029-94c3-5670ce333c66\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") "
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.043966 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-db-sync-config-data\") pod \"2a9c1def-826f-4029-94c3-5670ce333c66\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") "
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.044012 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-config-data\") pod \"2a9c1def-826f-4029-94c3-5670ce333c66\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") "
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.044077 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-combined-ca-bundle\") pod \"2a9c1def-826f-4029-94c3-5670ce333c66\" (UID: \"2a9c1def-826f-4029-94c3-5670ce333c66\") "
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.050836 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9c1def-826f-4029-94c3-5670ce333c66-kube-api-access-xgt45" (OuterVolumeSpecName: "kube-api-access-xgt45") pod "2a9c1def-826f-4029-94c3-5670ce333c66" (UID: "2a9c1def-826f-4029-94c3-5670ce333c66"). InnerVolumeSpecName "kube-api-access-xgt45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.051179 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2a9c1def-826f-4029-94c3-5670ce333c66" (UID: "2a9c1def-826f-4029-94c3-5670ce333c66"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.071822 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a9c1def-826f-4029-94c3-5670ce333c66" (UID: "2a9c1def-826f-4029-94c3-5670ce333c66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.099850 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-config-data" (OuterVolumeSpecName: "config-data") pod "2a9c1def-826f-4029-94c3-5670ce333c66" (UID: "2a9c1def-826f-4029-94c3-5670ce333c66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.146082 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgt45\" (UniqueName: \"kubernetes.io/projected/2a9c1def-826f-4029-94c3-5670ce333c66-kube-api-access-xgt45\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.146125 4771 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.146136 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.146154 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9c1def-826f-4029-94c3-5670ce333c66-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.461479 4771 generic.go:334] "Generic (PLEG): container finished" podID="543a7e6c-ab47-4720-b5f0-6b0800904d36" containerID="c11180a095aaecfcf420210cbb6ff664b97b7dcc99e5e4a7e8fe3c190ac96f70" exitCode=0
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.461551 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snzfb" event={"ID":"543a7e6c-ab47-4720-b5f0-6b0800904d36","Type":"ContainerDied","Data":"c11180a095aaecfcf420210cbb6ff664b97b7dcc99e5e4a7e8fe3c190ac96f70"}
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.467301 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wl28d"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.467783 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wl28d" event={"ID":"2a9c1def-826f-4029-94c3-5670ce333c66","Type":"ContainerDied","Data":"b22c45ace1d50986213ba0432464963044cd57e1d13cd38f33d541ab48c4ba68"}
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.467825 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22c45ace1d50986213ba0432464963044cd57e1d13cd38f33d541ab48c4ba68"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.849538 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-zc5rm"]
Jan 29 09:24:56 crc kubenswrapper[4771]: E0129 09:24:56.850510 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9c1def-826f-4029-94c3-5670ce333c66" containerName="glance-db-sync"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.850584 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9c1def-826f-4029-94c3-5670ce333c66" containerName="glance-db-sync"
Jan 29 09:24:56 crc kubenswrapper[4771]: E0129 09:24:56.850665 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.850758 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: E0129 09:24:56.850841 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84de2c50-e584-4134-82bd-5868077005af" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.850898 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="84de2c50-e584-4134-82bd-5868077005af" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: E0129 09:24:56.850974 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abdcffc7-132e-4569-9a16-cae3202fcab8" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.851052 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="abdcffc7-132e-4569-9a16-cae3202fcab8" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: E0129 09:24:56.851130 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a486225-e2df-458e-8db7-5d5bc40e7fe4" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.851207 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a486225-e2df-458e-8db7-5d5bc40e7fe4" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: E0129 09:24:56.851295 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef8563f-f9a9-4ca6-a211-a7b1745337bf" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.851360 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef8563f-f9a9-4ca6-a211-a7b1745337bf" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: E0129 09:24:56.851430 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.851497 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.851793 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.851883 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.851947 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a486225-e2df-458e-8db7-5d5bc40e7fe4" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.852010 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef8563f-f9a9-4ca6-a211-a7b1745337bf" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.852089 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="abdcffc7-132e-4569-9a16-cae3202fcab8" containerName="mariadb-account-create-update"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.852161 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="84de2c50-e584-4134-82bd-5868077005af" containerName="mariadb-database-create"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.852223 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9c1def-826f-4029-94c3-5670ce333c66" containerName="glance-db-sync"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.853344 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.860051 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.860341 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-dns-svc\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.860459 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-config\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.860574 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm"
Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.860671 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbl56\" (UniqueName:
\"kubernetes.io/projected/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-kube-api-access-fbl56\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.860348 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-zc5rm"] Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.961973 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-dns-svc\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.962634 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-config\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.962907 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.963017 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbl56\" (UniqueName: \"kubernetes.io/projected/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-kube-api-access-fbl56\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.963132 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.963189 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-dns-svc\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.963375 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-config\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.963536 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.963973 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:56 crc kubenswrapper[4771]: I0129 09:24:56.979399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbl56\" (UniqueName: \"kubernetes.io/projected/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-kube-api-access-fbl56\") pod \"dnsmasq-dns-74dc88fc-zc5rm\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") " pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.183143 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.479157 4771 generic.go:334] "Generic (PLEG): container finished" podID="94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" containerID="12cce7233911ce2afbecc7f41abeb2339f7c9c0f50aa596dea3521b06f33923d" exitCode=0 Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.479997 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vrfbh" event={"ID":"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29","Type":"ContainerDied","Data":"12cce7233911ce2afbecc7f41abeb2339f7c9c0f50aa596dea3521b06f33923d"} Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.688913 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-zc5rm"] Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.788538 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-snzfb" Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.985052 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-swiftconf\") pod \"543a7e6c-ab47-4720-b5f0-6b0800904d36\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.985105 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzbcm\" (UniqueName: \"kubernetes.io/projected/543a7e6c-ab47-4720-b5f0-6b0800904d36-kube-api-access-mzbcm\") pod \"543a7e6c-ab47-4720-b5f0-6b0800904d36\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.985152 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-combined-ca-bundle\") pod \"543a7e6c-ab47-4720-b5f0-6b0800904d36\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.985194 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-dispersionconf\") pod \"543a7e6c-ab47-4720-b5f0-6b0800904d36\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.985248 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/543a7e6c-ab47-4720-b5f0-6b0800904d36-etc-swift\") pod \"543a7e6c-ab47-4720-b5f0-6b0800904d36\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.985335 4771 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-ring-data-devices\") pod \"543a7e6c-ab47-4720-b5f0-6b0800904d36\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.985390 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-scripts\") pod \"543a7e6c-ab47-4720-b5f0-6b0800904d36\" (UID: \"543a7e6c-ab47-4720-b5f0-6b0800904d36\") " Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.986280 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "543a7e6c-ab47-4720-b5f0-6b0800904d36" (UID: "543a7e6c-ab47-4720-b5f0-6b0800904d36"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.986593 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543a7e6c-ab47-4720-b5f0-6b0800904d36-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "543a7e6c-ab47-4720-b5f0-6b0800904d36" (UID: "543a7e6c-ab47-4720-b5f0-6b0800904d36"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.991923 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543a7e6c-ab47-4720-b5f0-6b0800904d36-kube-api-access-mzbcm" (OuterVolumeSpecName: "kube-api-access-mzbcm") pod "543a7e6c-ab47-4720-b5f0-6b0800904d36" (UID: "543a7e6c-ab47-4720-b5f0-6b0800904d36"). InnerVolumeSpecName "kube-api-access-mzbcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:57 crc kubenswrapper[4771]: I0129 09:24:57.998898 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "543a7e6c-ab47-4720-b5f0-6b0800904d36" (UID: "543a7e6c-ab47-4720-b5f0-6b0800904d36"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.010922 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "543a7e6c-ab47-4720-b5f0-6b0800904d36" (UID: "543a7e6c-ab47-4720-b5f0-6b0800904d36"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.016853 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "543a7e6c-ab47-4720-b5f0-6b0800904d36" (UID: "543a7e6c-ab47-4720-b5f0-6b0800904d36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.018639 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-scripts" (OuterVolumeSpecName: "scripts") pod "543a7e6c-ab47-4720-b5f0-6b0800904d36" (UID: "543a7e6c-ab47-4720-b5f0-6b0800904d36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.087387 4771 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.087448 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/543a7e6c-ab47-4720-b5f0-6b0800904d36-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.087457 4771 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.087470 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzbcm\" (UniqueName: \"kubernetes.io/projected/543a7e6c-ab47-4720-b5f0-6b0800904d36-kube-api-access-mzbcm\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.087489 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.087500 4771 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/543a7e6c-ab47-4720-b5f0-6b0800904d36-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.087509 4771 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/543a7e6c-ab47-4720-b5f0-6b0800904d36-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.487914 4771 generic.go:334] "Generic (PLEG): container finished" podID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerID="a58252355259db6d970171fe7913edc09331545313fde451112668de56008d89" exitCode=0 Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.488003 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" event={"ID":"cb0c00c1-1460-4ff7-ae82-03acda6d79ef","Type":"ContainerDied","Data":"a58252355259db6d970171fe7913edc09331545313fde451112668de56008d89"} Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.488035 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" event={"ID":"cb0c00c1-1460-4ff7-ae82-03acda6d79ef","Type":"ContainerStarted","Data":"7f0b9e0424a5281e95ab5db6143e4e57aeb9e8a2374f75a658460090c0bd8fdd"} Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.492167 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-snzfb" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.492157 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-snzfb" event={"ID":"543a7e6c-ab47-4720-b5f0-6b0800904d36","Type":"ContainerDied","Data":"1b458285a354ca0cba7373b2e1eefd1c5ea0d7669206302ef4bcfce52ccf3526"} Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.492330 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b458285a354ca0cba7373b2e1eefd1c5ea0d7669206302ef4bcfce52ccf3526" Jan 29 09:24:58 crc kubenswrapper[4771]: I0129 09:24:58.810720 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vrfbh" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.004568 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-config-data\") pod \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.004721 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcmzm\" (UniqueName: \"kubernetes.io/projected/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-kube-api-access-zcmzm\") pod \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.004806 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-combined-ca-bundle\") pod \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\" (UID: \"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29\") " Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.016069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-kube-api-access-zcmzm" (OuterVolumeSpecName: "kube-api-access-zcmzm") pod "94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" (UID: "94fc0688-0a4c-48f4-83c6-2aa6bbfcde29"). InnerVolumeSpecName "kube-api-access-zcmzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.036990 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" (UID: "94fc0688-0a4c-48f4-83c6-2aa6bbfcde29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.050904 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-config-data" (OuterVolumeSpecName: "config-data") pod "94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" (UID: "94fc0688-0a4c-48f4-83c6-2aa6bbfcde29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.108305 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.108341 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcmzm\" (UniqueName: \"kubernetes.io/projected/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-kube-api-access-zcmzm\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.108351 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.501557 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vrfbh" event={"ID":"94fc0688-0a4c-48f4-83c6-2aa6bbfcde29","Type":"ContainerDied","Data":"a1fa9b29be56f7eb1e070640079026ecc978d47ce65a3b58daadd8618cfcf088"} Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.501889 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1fa9b29be56f7eb1e070640079026ecc978d47ce65a3b58daadd8618cfcf088" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.501838 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vrfbh" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.503963 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" event={"ID":"cb0c00c1-1460-4ff7-ae82-03acda6d79ef","Type":"ContainerStarted","Data":"94294cdbc875ab287d28c6de6601e85bccb0b69d7fbff2f2948534e84d9d65eb"} Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.504888 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.528585 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" podStartSLOduration=3.528565429 podStartE2EDuration="3.528565429s" podCreationTimestamp="2026-01-29 09:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:24:59.523620114 +0000 UTC m=+1119.646460361" watchObservedRunningTime="2026-01-29 09:24:59.528565429 +0000 UTC m=+1119.651405656" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.776083 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-zc5rm"] Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.809870 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qd4ql"] Jan 29 09:24:59 crc kubenswrapper[4771]: E0129 09:24:59.810324 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" containerName="keystone-db-sync" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.810344 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" containerName="keystone-db-sync" Jan 29 09:24:59 crc kubenswrapper[4771]: E0129 09:24:59.810373 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543a7e6c-ab47-4720-b5f0-6b0800904d36" 
containerName="swift-ring-rebalance" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.810380 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="543a7e6c-ab47-4720-b5f0-6b0800904d36" containerName="swift-ring-rebalance" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.810574 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" containerName="keystone-db-sync" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.810593 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="543a7e6c-ab47-4720-b5f0-6b0800904d36" containerName="swift-ring-rebalance" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.811201 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.822127 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.822356 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.822497 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.822614 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.822797 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-497zd" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.837740 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-qvmc6"] Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.840213 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.864906 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qd4ql"] Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.890771 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-qvmc6"] Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.926395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-scripts\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.926476 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-config-data\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.926568 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchmm\" (UniqueName: \"kubernetes.io/projected/d42c23a8-c6c5-4def-9c48-9d3320212293-kube-api-access-dchmm\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.926602 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-combined-ca-bundle\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.926667 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-fernet-keys\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:24:59 crc kubenswrapper[4771]: I0129 09:24:59.926689 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-credential-keys\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032114 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-scripts\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032207 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87kn\" (UniqueName: \"kubernetes.io/projected/ae40a990-21b7-4463-b9b0-92fc34c270da-kube-api-access-f87kn\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " 
pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032247 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-config-data\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032277 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-config\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032342 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032440 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032481 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dchmm\" (UniqueName: \"kubernetes.io/projected/d42c23a8-c6c5-4def-9c48-9d3320212293-kube-api-access-dchmm\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-combined-ca-bundle\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032553 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-dns-svc\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032611 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-fernet-keys\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.032647 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-credential-keys\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" 
Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.048378 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-credential-keys\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.065251 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f9f46d67c-jvqxf"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.066964 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.072718 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-combined-ca-bundle\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.074352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-scripts\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.078287 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tfk8m" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.078502 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.078640 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.078880 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.083214 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-config-data\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.090732 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-fernet-keys\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.106515 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dchmm\" (UniqueName: \"kubernetes.io/projected/d42c23a8-c6c5-4def-9c48-9d3320212293-kube-api-access-dchmm\") pod \"keystone-bootstrap-qd4ql\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.124322 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-g2fcd"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.125663 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.138330 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.138436 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rjkk" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.138540 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.139540 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.139644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.139692 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-dns-svc\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.139828 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f87kn\" (UniqueName: \"kubernetes.io/projected/ae40a990-21b7-4463-b9b0-92fc34c270da-kube-api-access-f87kn\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.139867 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-config\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.141174 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.141811 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.142466 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-dns-svc\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: 
\"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.144756 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-g2fcd"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.157830 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.161752 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-config\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.246586 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ab9c1d-3798-4151-bf0b-63227f0e45a4-etc-machine-id\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.246951 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b80061c9-e692-4bdd-8cf0-d9e9217d206d-horizon-secret-key\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.246990 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-scripts\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247022 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b80061c9-e692-4bdd-8cf0-d9e9217d206d-logs\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247095 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd46f\" (UniqueName: \"kubernetes.io/projected/b80061c9-e692-4bdd-8cf0-d9e9217d206d-kube-api-access-vd46f\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247158 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppsn5\" (UniqueName: \"kubernetes.io/projected/29ab9c1d-3798-4151-bf0b-63227f0e45a4-kube-api-access-ppsn5\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247179 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-config-data\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 
09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247213 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-scripts\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247235 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-db-sync-config-data\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247272 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-combined-ca-bundle\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247296 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-config-data\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.247482 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9f46d67c-jvqxf"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.287467 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f87kn\" (UniqueName: \"kubernetes.io/projected/ae40a990-21b7-4463-b9b0-92fc34c270da-kube-api-access-f87kn\") pod \"dnsmasq-dns-7d5679f497-qvmc6\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") " pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.352957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd46f\" (UniqueName: \"kubernetes.io/projected/b80061c9-e692-4bdd-8cf0-d9e9217d206d-kube-api-access-vd46f\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppsn5\" (UniqueName: \"kubernetes.io/projected/29ab9c1d-3798-4151-bf0b-63227f0e45a4-kube-api-access-ppsn5\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353252 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-config-data\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353364 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-scripts\") pod \"horizon-f9f46d67c-jvqxf\" (UID: 
\"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353387 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-db-sync-config-data\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353492 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-combined-ca-bundle\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353540 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-config-data\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353606 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ab9c1d-3798-4151-bf0b-63227f0e45a4-etc-machine-id\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353640 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b80061c9-e692-4bdd-8cf0-d9e9217d206d-horizon-secret-key\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.353675 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-scripts\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.370080 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b80061c9-e692-4bdd-8cf0-d9e9217d206d-logs\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.419402 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-config-data\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.429577 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-db-sync-config-data\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.430599 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/b80061c9-e692-4bdd-8cf0-d9e9217d206d-logs\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.431408 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-scripts\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.446100 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ab9c1d-3798-4151-bf0b-63227f0e45a4-etc-machine-id\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.448043 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-p2ghj"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.449861 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.483164 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b80061c9-e692-4bdd-8cf0-d9e9217d206d-horizon-secret-key\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.483667 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-combined-ca-bundle\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.484162 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.484506 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.484787 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-26mfg" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.486780 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppsn5\" (UniqueName: \"kubernetes.io/projected/29ab9c1d-3798-4151-bf0b-63227f0e45a4-kube-api-access-ppsn5\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.491993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-config-data\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.492447 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.493371 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-scripts\") pod \"cinder-db-sync-g2fcd\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.501270 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.514399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd46f\" (UniqueName: \"kubernetes.io/projected/b80061c9-e692-4bdd-8cf0-d9e9217d206d-kube-api-access-vd46f\") pod \"horizon-f9f46d67c-jvqxf\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.583482 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p2ghj"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.608329 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.627800 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66cc8c846f-2txw5"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.629438 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.641807 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nq2\" (UniqueName: \"kubernetes.io/projected/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-kube-api-access-64nq2\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.641854 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-combined-ca-bundle\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.641957 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-config\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.652317 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bvppf"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.653865 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.663289 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.663528 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2tx8m" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.668873 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bvppf"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.678295 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66cc8c846f-2txw5"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.717816 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.746793 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-config-data\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.746837 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be50ed28-504a-4658-9e82-0e9d76c0a141-horizon-secret-key\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.746861 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-config\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.746908 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2pn\" (UniqueName: \"kubernetes.io/projected/015c0ccc-d729-4d0a-8168-b897f1c451da-kube-api-access-nv2pn\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.746976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64nq2\" (UniqueName: \"kubernetes.io/projected/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-kube-api-access-64nq2\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.746994 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be50ed28-504a-4658-9e82-0e9d76c0a141-logs\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.747014 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-combined-ca-bundle\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " 
pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.747048 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj5qw\" (UniqueName: \"kubernetes.io/projected/be50ed28-504a-4658-9e82-0e9d76c0a141-kube-api-access-vj5qw\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.747072 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-db-sync-config-data\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.747099 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-combined-ca-bundle\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.747120 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-scripts\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.754401 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-config\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.756778 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.757687 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.758183 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-combined-ca-bundle\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.772347 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.772675 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.773069 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.774784 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.797419 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.802611 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.802719 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.803216 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x86l6" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.833500 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nq2\" (UniqueName: \"kubernetes.io/projected/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-kube-api-access-64nq2\") pod \"neutron-db-sync-p2ghj\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj5qw\" (UniqueName: \"kubernetes.io/projected/be50ed28-504a-4658-9e82-0e9d76c0a141-kube-api-access-vj5qw\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858196 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-db-sync-config-data\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858215 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858239 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-combined-ca-bundle\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858258 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-scripts\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858298 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn84q\" (UniqueName: \"kubernetes.io/projected/4e00d81c-fcca-450b-be4f-8a47b837b28e-kube-api-access-fn84q\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858315 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-scripts\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858334 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858351 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-config-data\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858367 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be50ed28-504a-4658-9e82-0e9d76c0a141-horizon-secret-key\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858392 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858408 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-config-data\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858423 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-logs\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858452 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-run-httpd\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858476 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv2pn\" (UniqueName: 
\"kubernetes.io/projected/015c0ccc-d729-4d0a-8168-b897f1c451da-kube-api-access-nv2pn\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858513 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858534 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-log-httpd\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858630 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be50ed28-504a-4658-9e82-0e9d76c0a141-logs\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.858648 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfnpt\" (UniqueName: \"kubernetes.io/projected/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-kube-api-access-cfnpt\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.862366 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-scripts\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.863289 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-config-data\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.864681 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be50ed28-504a-4658-9e82-0e9d76c0a141-logs\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " 
pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.869023 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.879272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-db-sync-config-data\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.881313 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be50ed28-504a-4658-9e82-0e9d76c0a141-horizon-secret-key\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.889427 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-combined-ca-bundle\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.921889 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8v8h9"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.923404 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.932544 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.932928 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gr4k5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.936557 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.952843 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj5qw\" (UniqueName: \"kubernetes.io/projected/be50ed28-504a-4658-9e82-0e9d76c0a141-kube-api-access-vj5qw\") pod \"horizon-66cc8c846f-2txw5\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.957549 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv2pn\" (UniqueName: \"kubernetes.io/projected/015c0ccc-d729-4d0a-8168-b897f1c451da-kube-api-access-nv2pn\") pod \"barbican-db-sync-bvppf\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.961960 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962013 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn84q\" (UniqueName: 
\"kubernetes.io/projected/4e00d81c-fcca-450b-be4f-8a47b837b28e-kube-api-access-fn84q\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962032 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-scripts\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962049 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962091 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-config-data\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962122 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-logs\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962148 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-run-httpd\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962193 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962212 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962242 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-log-httpd\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962283 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962364 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfnpt\" (UniqueName: \"kubernetes.io/projected/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-kube-api-access-cfnpt\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.962454 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.963932 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-qvmc6"] Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.965233 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.970068 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.971088 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-run-httpd\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.972252 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-log-httpd\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.973933 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.975811 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.982038 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.988387 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.993493 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-config-data\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.994042 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.994342 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-logs\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:00 crc kubenswrapper[4771]: I0129 09:25:00.996399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-scripts\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.006242 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.010468 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfnpt\" (UniqueName: \"kubernetes.io/projected/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-kube-api-access-cfnpt\") pod \"ceilometer-0\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " pod="openstack/ceilometer-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.013262 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8v8h9"] Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.020023 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.020398 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn84q\" (UniqueName: \"kubernetes.io/projected/4e00d81c-fcca-450b-be4f-8a47b837b28e-kube-api-access-fn84q\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.032803 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-lwtkw"] Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.034359 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.057116 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-lwtkw"] Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.061995 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.064794 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.064857 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-combined-ca-bundle\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.064881 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.064916 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzqg\" (UniqueName: \"kubernetes.io/projected/883872f0-cb88-4095-b918-b971d8c3c0b6-kube-api-access-mpzqg\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.064935 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-config-data\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.064959 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-config\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.065022 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-scripts\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.065043 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-dns-svc\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.065069 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wx9n\" (UniqueName: \"kubernetes.io/projected/82515d22-24f0-4673-ae59-bd788ca54a64-kube-api-access-7wx9n\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.065218 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883872f0-cb88-4095-b918-b971d8c3c0b6-logs\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.066579 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.068134 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.078422 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.101033 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.135676 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.148227 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.191870 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192278 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192468 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192536 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192576 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-scripts\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192643 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-dns-svc\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192727 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wx9n\" (UniqueName: \"kubernetes.io/projected/82515d22-24f0-4673-ae59-bd788ca54a64-kube-api-access-7wx9n\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192800 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.192957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883872f0-cb88-4095-b918-b971d8c3c0b6-logs\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.193022 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.193079 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-combined-ca-bundle\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.193102 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxt4m\" (UniqueName: \"kubernetes.io/projected/a2ec7096-d911-4727-ba80-f9ea52e53d47-kube-api-access-dxt4m\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.193136 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.200843 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpzqg\" (UniqueName: \"kubernetes.io/projected/883872f0-cb88-4095-b918-b971d8c3c0b6-kube-api-access-mpzqg\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.200925 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-config-data\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.200996 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-config\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.203905 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-dns-svc\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 
09:25:01.212906 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.212923 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-config\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.213303 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883872f0-cb88-4095-b918-b971d8c3c0b6-logs\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.233248 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-scripts\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.238099 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-config-data\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.245510 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wx9n\" (UniqueName: \"kubernetes.io/projected/82515d22-24f0-4673-ae59-bd788ca54a64-kube-api-access-7wx9n\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.252961 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-lwtkw\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.270799 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-combined-ca-bundle\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.295932 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpzqg\" (UniqueName: \"kubernetes.io/projected/883872f0-cb88-4095-b918-b971d8c3c0b6-kube-api-access-mpzqg\") pod \"placement-db-sync-8v8h9\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.317328 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.317407 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.317559 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.317582 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.317620 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.317731 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.317896 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxt4m\" (UniqueName: \"kubernetes.io/projected/a2ec7096-d911-4727-ba80-f9ea52e53d47-kube-api-access-dxt4m\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.325625 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.332193 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.333729 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " 
pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.344623 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.363158 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxt4m\" (UniqueName: \"kubernetes.io/projected/a2ec7096-d911-4727-ba80-f9ea52e53d47-kube-api-access-dxt4m\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.373236 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.377349 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.423736 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") " pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.478631 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qd4ql"]
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.568223 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8v8h9"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.593076 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-lwtkw"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.604688 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" podUID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerName="dnsmasq-dns" containerID="cri-o://94294cdbc875ab287d28c6de6601e85bccb0b69d7fbff2f2948534e84d9d65eb" gracePeriod=10
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.607523 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qd4ql" event={"ID":"d42c23a8-c6c5-4def-9c48-9d3320212293","Type":"ContainerStarted","Data":"e43551a3e96ba4b31565ef9efb4456a413b1d929069c6286552839828e336a1f"}
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.622841 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:01 crc kubenswrapper[4771]: I0129 09:25:01.810376 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-g2fcd"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.028432 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9f46d67c-jvqxf"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.041290 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p2ghj"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.060852 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-qvmc6"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.187042 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66cc8c846f-2txw5"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.231780 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bvppf"]
Jan 29 09:25:02 crc kubenswrapper[4771]: W0129 09:25:02.243908 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod015c0ccc_d729_4d0a_8168_b897f1c451da.slice/crio-fa6f826cd0dde214f8f237d517d10a72f9b06e19fff8877612079583244df808 WatchSource:0}: Error finding container fa6f826cd0dde214f8f237d517d10a72f9b06e19fff8877612079583244df808: Status 404 returned error can't find the container with id fa6f826cd0dde214f8f237d517d10a72f9b06e19fff8877612079583244df808
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.382415 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:25:02 crc kubenswrapper[4771]: W0129 09:25:02.391028 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac1a3ad8_59de_4f46_83b0_6886b1ef4ffe.slice/crio-c58f60ef5e1e947a87cc94291df2411997e935a024d84fd483b011a92fdc5e13 WatchSource:0}: Error finding container c58f60ef5e1e947a87cc94291df2411997e935a024d84fd483b011a92fdc5e13: Status 404 returned error can't find the container with id c58f60ef5e1e947a87cc94291df2411997e935a024d84fd483b011a92fdc5e13
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.391680 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8v8h9"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.575583 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.617169 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p2ghj" event={"ID":"5de5b9ec-6c6b-4e51-a053-d0076c2c729e","Type":"ContainerStarted","Data":"55501460cf0282477722afa8e500e0d6d7b62e6e50b5248a0e1e2c83b00aebc4"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.623340 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-lwtkw"]
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.623551 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v8h9" event={"ID":"883872f0-cb88-4095-b918-b971d8c3c0b6","Type":"ContainerStarted","Data":"7c4e5b2427cc5ed42a0dff5871e7d277e2bbd147c23d77dd19e5ea672c788cd2"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.625240 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9f46d67c-jvqxf" event={"ID":"b80061c9-e692-4bdd-8cf0-d9e9217d206d","Type":"ContainerStarted","Data":"454de75d92ae4da8ab5e11e788fe61ad492c43f2e07c7db425845c6657abef25"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.627397 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66cc8c846f-2txw5" event={"ID":"be50ed28-504a-4658-9e82-0e9d76c0a141","Type":"ContainerStarted","Data":"78f40a8b2bec5680ecf08f7dfa05e81a3394d54e78455c6482a3f4d1a03409de"}
Jan 29 09:25:02 crc kubenswrapper[4771]: W0129 09:25:02.628016 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e00d81c_fcca_450b_be4f_8a47b837b28e.slice/crio-7539e2fe0ac61a0c1a3afea146e8ed934ca235ff4cd3b8e74202c7cdf91f8538 WatchSource:0}: Error finding container 7539e2fe0ac61a0c1a3afea146e8ed934ca235ff4cd3b8e74202c7cdf91f8538: Status 404 returned error can't find the container with id 7539e2fe0ac61a0c1a3afea146e8ed934ca235ff4cd3b8e74202c7cdf91f8538
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.629722 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qd4ql" event={"ID":"d42c23a8-c6c5-4def-9c48-9d3320212293","Type":"ContainerStarted","Data":"210faa8939b1d3439762d9c1ade0884e7cddf35d1eccb277509739a4e9a09e15"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.641651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bvppf" event={"ID":"015c0ccc-d729-4d0a-8168-b897f1c451da","Type":"ContainerStarted","Data":"fa6f826cd0dde214f8f237d517d10a72f9b06e19fff8877612079583244df808"}
Jan 29 09:25:02 crc kubenswrapper[4771]: W0129 09:25:02.643258 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82515d22_24f0_4673_ae59_bd788ca54a64.slice/crio-d1fa4f70527aed561fab52e58ac200ae0ff5e9d3b5c698bcb2f025e740dbc129 WatchSource:0}: Error finding container d1fa4f70527aed561fab52e58ac200ae0ff5e9d3b5c698bcb2f025e740dbc129: Status 404 returned error can't find the container with id d1fa4f70527aed561fab52e58ac200ae0ff5e9d3b5c698bcb2f025e740dbc129
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.645219 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g2fcd" event={"ID":"29ab9c1d-3798-4151-bf0b-63227f0e45a4","Type":"ContainerStarted","Data":"6ea5a32a03f8c453e896fd47e7173200e7c9cac4f9b0a13e245ef5530681beb6"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.647661 4771 generic.go:334] "Generic (PLEG): container finished" podID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerID="94294cdbc875ab287d28c6de6601e85bccb0b69d7fbff2f2948534e84d9d65eb" exitCode=0
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.647717 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" event={"ID":"cb0c00c1-1460-4ff7-ae82-03acda6d79ef","Type":"ContainerDied","Data":"94294cdbc875ab287d28c6de6601e85bccb0b69d7fbff2f2948534e84d9d65eb"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.649291 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" event={"ID":"ae40a990-21b7-4463-b9b0-92fc34c270da","Type":"ContainerStarted","Data":"84e5f9c0ecdd2a8ea168fa0daac31cf26f1db531f560179331d895f1e46b461a"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.650829 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerStarted","Data":"c58f60ef5e1e947a87cc94291df2411997e935a024d84fd483b011a92fdc5e13"}
Jan 29 09:25:02 crc kubenswrapper[4771]: I0129 09:25:02.732969 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.427345 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9f46d67c-jvqxf"]
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.531773 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.565076 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-568bc96dfc-zdb8c"]
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.567451 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.599877 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.656512 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-568bc96dfc-zdb8c"]
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.697093 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba80d5e-ee59-4305-974f-7645a09de37b-horizon-secret-key\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.697226 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba80d5e-ee59-4305-974f-7645a09de37b-logs\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.697334 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-config-data\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.697367 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8gk4\" (UniqueName: \"kubernetes.io/projected/aba80d5e-ee59-4305-974f-7645a09de37b-kube-api-access-s8gk4\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.697399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-scripts\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.709736 4771 generic.go:334] "Generic (PLEG): container finished" podID="ae40a990-21b7-4463-b9b0-92fc34c270da" containerID="8a2109fc8ff8e62388b0684e8155e04018949d6e9a8b3ed4c5b8aefad09cff78" exitCode=0
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.709973 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" event={"ID":"ae40a990-21b7-4463-b9b0-92fc34c270da","Type":"ContainerDied","Data":"8a2109fc8ff8e62388b0684e8155e04018949d6e9a8b3ed4c5b8aefad09cff78"}
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.730007 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e00d81c-fcca-450b-be4f-8a47b837b28e","Type":"ContainerStarted","Data":"7539e2fe0ac61a0c1a3afea146e8ed934ca235ff4cd3b8e74202c7cdf91f8538"}
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.767968 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" event={"ID":"82515d22-24f0-4673-ae59-bd788ca54a64","Type":"ContainerStarted","Data":"2f8bf43571910b0facc40d2db89513ca5af9aa16c346c96844fba970de57499d"}
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.768019 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" event={"ID":"82515d22-24f0-4673-ae59-bd788ca54a64","Type":"ContainerStarted","Data":"d1fa4f70527aed561fab52e58ac200ae0ff5e9d3b5c698bcb2f025e740dbc129"}
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.800906 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba80d5e-ee59-4305-974f-7645a09de37b-horizon-secret-key\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.801013 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba80d5e-ee59-4305-974f-7645a09de37b-logs\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.801099 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-config-data\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.801127 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8gk4\" (UniqueName: \"kubernetes.io/projected/aba80d5e-ee59-4305-974f-7645a09de37b-kube-api-access-s8gk4\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.801154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-scripts\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.801968 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p2ghj" event={"ID":"5de5b9ec-6c6b-4e51-a053-d0076c2c729e","Type":"ContainerStarted","Data":"02113350a64fb52a1a50a806562328d66a7f36ada03fabf6d1498be3bd7e7929"}
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.801995 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-scripts\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.803495 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba80d5e-ee59-4305-974f-7645a09de37b-logs\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.804727 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-config-data\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.814505 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba80d5e-ee59-4305-974f-7645a09de37b-horizon-secret-key\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.845476 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8gk4\" (UniqueName: \"kubernetes.io/projected/aba80d5e-ee59-4305-974f-7645a09de37b-kube-api-access-s8gk4\") pod \"horizon-568bc96dfc-zdb8c\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.852255 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2ec7096-d911-4727-ba80-f9ea52e53d47","Type":"ContainerStarted","Data":"add34908cfd5f879b25a8a552490a44134c5f956b2b91e6e097f427a6de4595a"}
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.929486 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-568bc96dfc-zdb8c"
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.941516 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 09:25:03 crc kubenswrapper[4771]: I0129 09:25:03.994546 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qd4ql" podStartSLOduration=4.99452282 podStartE2EDuration="4.99452282s" podCreationTimestamp="2026-01-29 09:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:03.941428679 +0000 UTC m=+1124.064268916" watchObservedRunningTime="2026-01-29 09:25:03.99452282 +0000 UTC m=+1124.117363057"
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.059254 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-p2ghj" podStartSLOduration=4.059229618 podStartE2EDuration="4.059229618s" podCreationTimestamp="2026-01-29 09:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:03.987355524 +0000 UTC m=+1124.110195751" watchObservedRunningTime="2026-01-29 09:25:04.059229618 +0000 UTC m=+1124.182069845"
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.396226 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm"
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.423014 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-dns-svc\") pod \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.423283 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbl56\" (UniqueName: \"kubernetes.io/projected/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-kube-api-access-fbl56\") pod \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.423341 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-sb\") pod \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.423370 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-nb\") pod \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.423424 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-config\") pod \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\" (UID: \"cb0c00c1-1460-4ff7-ae82-03acda6d79ef\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.430985 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-kube-api-access-fbl56" (OuterVolumeSpecName: "kube-api-access-fbl56") pod "cb0c00c1-1460-4ff7-ae82-03acda6d79ef" (UID: "cb0c00c1-1460-4ff7-ae82-03acda6d79ef"). InnerVolumeSpecName "kube-api-access-fbl56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.465753 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-qvmc6"
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.525919 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-nb\") pod \"ae40a990-21b7-4463-b9b0-92fc34c270da\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.526019 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f87kn\" (UniqueName: \"kubernetes.io/projected/ae40a990-21b7-4463-b9b0-92fc34c270da-kube-api-access-f87kn\") pod \"ae40a990-21b7-4463-b9b0-92fc34c270da\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.526072 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-sb\") pod \"ae40a990-21b7-4463-b9b0-92fc34c270da\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.526181 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-dns-svc\") pod \"ae40a990-21b7-4463-b9b0-92fc34c270da\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.526263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-config\") pod \"ae40a990-21b7-4463-b9b0-92fc34c270da\" (UID: \"ae40a990-21b7-4463-b9b0-92fc34c270da\") "
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.526976 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbl56\" (UniqueName: \"kubernetes.io/projected/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-kube-api-access-fbl56\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.549118 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae40a990-21b7-4463-b9b0-92fc34c270da-kube-api-access-f87kn" (OuterVolumeSpecName: "kube-api-access-f87kn") pod "ae40a990-21b7-4463-b9b0-92fc34c270da" (UID: "ae40a990-21b7-4463-b9b0-92fc34c270da"). InnerVolumeSpecName "kube-api-access-f87kn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.570750 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb0c00c1-1460-4ff7-ae82-03acda6d79ef" (UID: "cb0c00c1-1460-4ff7-ae82-03acda6d79ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.597215 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb0c00c1-1460-4ff7-ae82-03acda6d79ef" (UID: "cb0c00c1-1460-4ff7-ae82-03acda6d79ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.615438 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae40a990-21b7-4463-b9b0-92fc34c270da" (UID: "ae40a990-21b7-4463-b9b0-92fc34c270da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.621548 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae40a990-21b7-4463-b9b0-92fc34c270da" (UID: "ae40a990-21b7-4463-b9b0-92fc34c270da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.630464 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.630507 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f87kn\" (UniqueName: \"kubernetes.io/projected/ae40a990-21b7-4463-b9b0-92fc34c270da-kube-api-access-f87kn\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.630523 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.630536 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.630547 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.631466 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-config" (OuterVolumeSpecName: "config") pod "cb0c00c1-1460-4ff7-ae82-03acda6d79ef" (UID: "cb0c00c1-1460-4ff7-ae82-03acda6d79ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.637579 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb0c00c1-1460-4ff7-ae82-03acda6d79ef" (UID: "cb0c00c1-1460-4ff7-ae82-03acda6d79ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.643480 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-568bc96dfc-zdb8c"]
Jan 29 09:25:04 crc kubenswrapper[4771]: W0129 09:25:04.647666 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaba80d5e_ee59_4305_974f_7645a09de37b.slice/crio-4279a017ea2785166eee2af636a5291b645752bbe9a6d0325307cb608002dc62 WatchSource:0}: Error finding container 4279a017ea2785166eee2af636a5291b645752bbe9a6d0325307cb608002dc62: Status 404 returned error can't find the container with id 4279a017ea2785166eee2af636a5291b645752bbe9a6d0325307cb608002dc62
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.655351 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-config" (OuterVolumeSpecName: "config") pod "ae40a990-21b7-4463-b9b0-92fc34c270da" (UID: "ae40a990-21b7-4463-b9b0-92fc34c270da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.674058 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae40a990-21b7-4463-b9b0-92fc34c270da" (UID: "ae40a990-21b7-4463-b9b0-92fc34c270da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.734303 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.734351 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.734369 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c00c1-1460-4ff7-ae82-03acda6d79ef-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.734382 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae40a990-21b7-4463-b9b0-92fc34c270da-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.898028 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2ec7096-d911-4727-ba80-f9ea52e53d47","Type":"ContainerStarted","Data":"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd"}
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.901564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-568bc96dfc-zdb8c" event={"ID":"aba80d5e-ee59-4305-974f-7645a09de37b","Type":"ContainerStarted","Data":"4279a017ea2785166eee2af636a5291b645752bbe9a6d0325307cb608002dc62"}
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.911385 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-qvmc6" event={"ID":"ae40a990-21b7-4463-b9b0-92fc34c270da","Type":"ContainerDied","Data":"84e5f9c0ecdd2a8ea168fa0daac31cf26f1db531f560179331d895f1e46b461a"}
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.911443 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-qvmc6"
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.911458 4771 scope.go:117] "RemoveContainer" containerID="8a2109fc8ff8e62388b0684e8155e04018949d6e9a8b3ed4c5b8aefad09cff78"
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.924662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e00d81c-fcca-450b-be4f-8a47b837b28e","Type":"ContainerStarted","Data":"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb"}
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.965207 4771 generic.go:334] "Generic (PLEG): container finished" podID="82515d22-24f0-4673-ae59-bd788ca54a64" containerID="2f8bf43571910b0facc40d2db89513ca5af9aa16c346c96844fba970de57499d" exitCode=0
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.965365 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" event={"ID":"82515d22-24f0-4673-ae59-bd788ca54a64","Type":"ContainerDied","Data":"2f8bf43571910b0facc40d2db89513ca5af9aa16c346c96844fba970de57499d"}
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.972832 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-qvmc6"]
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.973674 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm"
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.974459 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-zc5rm" event={"ID":"cb0c00c1-1460-4ff7-ae82-03acda6d79ef","Type":"ContainerDied","Data":"7f0b9e0424a5281e95ab5db6143e4e57aeb9e8a2374f75a658460090c0bd8fdd"}
Jan 29 09:25:04 crc kubenswrapper[4771]: I0129 09:25:04.985329 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-qvmc6"]
Jan 29 09:25:05 crc kubenswrapper[4771]: I0129 09:25:05.035969 4771 scope.go:117] "RemoveContainer" containerID="94294cdbc875ab287d28c6de6601e85bccb0b69d7fbff2f2948534e84d9d65eb"
Jan 29 09:25:05 crc kubenswrapper[4771]: I0129 09:25:05.072577 4771 scope.go:117] "RemoveContainer" containerID="a58252355259db6d970171fe7913edc09331545313fde451112668de56008d89"
Jan 29 09:25:05 crc kubenswrapper[4771]: I0129 09:25:05.106881 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-zc5rm"]
Jan 29 09:25:05 crc kubenswrapper[4771]: I0129 09:25:05.120886 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-zc5rm"]
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.013828 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e00d81c-fcca-450b-be4f-8a47b837b28e","Type":"ContainerStarted","Data":"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06"}
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.013977 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-log" containerID="cri-o://15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb" gracePeriod=30
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.014245 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-httpd" containerID="cri-o://2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06" gracePeriod=30
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.032311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" event={"ID":"82515d22-24f0-4673-ae59-bd788ca54a64","Type":"ContainerStarted","Data":"750c7ff61470a27e421526514059d66ee5bb37bbc69f92666e71fbe1901c4c30"}
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.032482 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56798b757f-lwtkw"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.036001 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2ec7096-d911-4727-ba80-f9ea52e53d47","Type":"ContainerStarted","Data":"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2"}
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.036146 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-log" containerID="cri-o://e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd" gracePeriod=30
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.036488 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-httpd" containerID="cri-o://799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2" gracePeriod=30
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.046269 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.046230743 podStartE2EDuration="6.046230743s" podCreationTimestamp="2026-01-29 09:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:06.042317796 +0000 UTC m=+1126.165158013" watchObservedRunningTime="2026-01-29 09:25:06.046230743 +0000 UTC m=+1126.169070970"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.071853 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" podStartSLOduration=6.071835243 podStartE2EDuration="6.071835243s" podCreationTimestamp="2026-01-29 09:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:06.070400653 +0000 UTC m=+1126.193240890" watchObservedRunningTime="2026-01-29 09:25:06.071835243 +0000 UTC m=+1126.194675470"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.099629 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.099606511 podStartE2EDuration="6.099606511s" podCreationTimestamp="2026-01-29 09:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:06.091019087 +0000 UTC m=+1126.213859324" watchObservedRunningTime="2026-01-29 09:25:06.099606511 +0000 UTC m=+1126.222446728"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.810136 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.830185 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn84q\" (UniqueName: \"kubernetes.io/projected/4e00d81c-fcca-450b-be4f-8a47b837b28e-kube-api-access-fn84q\") pod \"4e00d81c-fcca-450b-be4f-8a47b837b28e\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.830264 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-httpd-run\") pod \"4e00d81c-fcca-450b-be4f-8a47b837b28e\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.830307 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-config-data\") pod \"4e00d81c-fcca-450b-be4f-8a47b837b28e\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.830517 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-scripts\") pod \"4e00d81c-fcca-450b-be4f-8a47b837b28e\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.830575 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4e00d81c-fcca-450b-be4f-8a47b837b28e\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.831210 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-logs\") pod \"4e00d81c-fcca-450b-be4f-8a47b837b28e\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.831252 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-combined-ca-bundle\") pod \"4e00d81c-fcca-450b-be4f-8a47b837b28e\" (UID: \"4e00d81c-fcca-450b-be4f-8a47b837b28e\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.833409 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4e00d81c-fcca-450b-be4f-8a47b837b28e" (UID: "4e00d81c-fcca-450b-be4f-8a47b837b28e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.838030 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-logs" (OuterVolumeSpecName: "logs") pod "4e00d81c-fcca-450b-be4f-8a47b837b28e" (UID: "4e00d81c-fcca-450b-be4f-8a47b837b28e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.868799 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-scripts" (OuterVolumeSpecName: "scripts") pod "4e00d81c-fcca-450b-be4f-8a47b837b28e" (UID: "4e00d81c-fcca-450b-be4f-8a47b837b28e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.868924 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "4e00d81c-fcca-450b-be4f-8a47b837b28e" (UID: "4e00d81c-fcca-450b-be4f-8a47b837b28e"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.871909 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e00d81c-fcca-450b-be4f-8a47b837b28e-kube-api-access-fn84q" (OuterVolumeSpecName: "kube-api-access-fn84q") pod "4e00d81c-fcca-450b-be4f-8a47b837b28e" (UID: "4e00d81c-fcca-450b-be4f-8a47b837b28e"). InnerVolumeSpecName "kube-api-access-fn84q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.872923 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae40a990-21b7-4463-b9b0-92fc34c270da" path="/var/lib/kubelet/pods/ae40a990-21b7-4463-b9b0-92fc34c270da/volumes"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.875958 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" path="/var/lib/kubelet/pods/cb0c00c1-1460-4ff7-ae82-03acda6d79ef/volumes"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.892457 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e00d81c-fcca-450b-be4f-8a47b837b28e" (UID: "4e00d81c-fcca-450b-be4f-8a47b837b28e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.918819 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.932709 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-config-data\") pod \"a2ec7096-d911-4727-ba80-f9ea52e53d47\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.932750 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-combined-ca-bundle\") pod \"a2ec7096-d911-4727-ba80-f9ea52e53d47\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.932877 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-scripts\") pod \"a2ec7096-d911-4727-ba80-f9ea52e53d47\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.932977 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"a2ec7096-d911-4727-ba80-f9ea52e53d47\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933037 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxt4m\" (UniqueName: \"kubernetes.io/projected/a2ec7096-d911-4727-ba80-f9ea52e53d47-kube-api-access-dxt4m\") pod \"a2ec7096-d911-4727-ba80-f9ea52e53d47\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933075 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-httpd-run\") pod \"a2ec7096-d911-4727-ba80-f9ea52e53d47\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933096 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-logs\") pod \"a2ec7096-d911-4727-ba80-f9ea52e53d47\" (UID: \"a2ec7096-d911-4727-ba80-f9ea52e53d47\") "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933602 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933629 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933641 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-logs\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933651 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933663 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn84q\" (UniqueName: \"kubernetes.io/projected/4e00d81c-fcca-450b-be4f-8a47b837b28e-kube-api-access-fn84q\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.933671 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e00d81c-fcca-450b-be4f-8a47b837b28e-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.941875 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a2ec7096-d911-4727-ba80-f9ea52e53d47" (UID: "a2ec7096-d911-4727-ba80-f9ea52e53d47"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.944334 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-logs" (OuterVolumeSpecName: "logs") pod "a2ec7096-d911-4727-ba80-f9ea52e53d47" (UID: "a2ec7096-d911-4727-ba80-f9ea52e53d47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.947088 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-scripts" (OuterVolumeSpecName: "scripts") pod "a2ec7096-d911-4727-ba80-f9ea52e53d47" (UID: "a2ec7096-d911-4727-ba80-f9ea52e53d47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.965720 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "a2ec7096-d911-4727-ba80-f9ea52e53d47" (UID: "a2ec7096-d911-4727-ba80-f9ea52e53d47"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.970213 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ec7096-d911-4727-ba80-f9ea52e53d47-kube-api-access-dxt4m" (OuterVolumeSpecName: "kube-api-access-dxt4m") pod "a2ec7096-d911-4727-ba80-f9ea52e53d47" (UID: "a2ec7096-d911-4727-ba80-f9ea52e53d47"). InnerVolumeSpecName "kube-api-access-dxt4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:25:06 crc kubenswrapper[4771]: I0129 09:25:06.993314 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.017417 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-config-data" (OuterVolumeSpecName: "config-data") pod "4e00d81c-fcca-450b-be4f-8a47b837b28e" (UID: "4e00d81c-fcca-450b-be4f-8a47b837b28e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.022868 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2ec7096-d911-4727-ba80-f9ea52e53d47" (UID: "a2ec7096-d911-4727-ba80-f9ea52e53d47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039351 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039388 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00d81c-fcca-450b-be4f-8a47b837b28e-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039400 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxt4m\" (UniqueName: \"kubernetes.io/projected/a2ec7096-d911-4727-ba80-f9ea52e53d47-kube-api-access-dxt4m\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039410 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039419 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2ec7096-d911-4727-ba80-f9ea52e53d47-logs\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039427 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039436 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.039445 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.048884 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-config-data" (OuterVolumeSpecName: "config-data") pod "a2ec7096-d911-4727-ba80-f9ea52e53d47" (UID: "a2ec7096-d911-4727-ba80-f9ea52e53d47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.057342 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.072643 4771 generic.go:334] "Generic (PLEG): container finished" podID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerID="799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2" exitCode=0
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.072734 4771 generic.go:334] "Generic (PLEG): container finished" podID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerID="e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd" exitCode=143
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.072760 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.072859 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2ec7096-d911-4727-ba80-f9ea52e53d47","Type":"ContainerDied","Data":"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2"}
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.072913 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2ec7096-d911-4727-ba80-f9ea52e53d47","Type":"ContainerDied","Data":"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd"}
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.072928 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2ec7096-d911-4727-ba80-f9ea52e53d47","Type":"ContainerDied","Data":"add34908cfd5f879b25a8a552490a44134c5f956b2b91e6e097f427a6de4595a"}
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.072948 4771 scope.go:117] "RemoveContainer" containerID="799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.080348 4771 generic.go:334] "Generic (PLEG): container finished" podID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerID="2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06" exitCode=143
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.080419 4771 generic.go:334] "Generic (PLEG): container finished" podID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerID="15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb" exitCode=143
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.082206 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.088054 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e00d81c-fcca-450b-be4f-8a47b837b28e","Type":"ContainerDied","Data":"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06"}
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.088333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e00d81c-fcca-450b-be4f-8a47b837b28e","Type":"ContainerDied","Data":"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb"}
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.088356 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e00d81c-fcca-450b-be4f-8a47b837b28e","Type":"ContainerDied","Data":"7539e2fe0ac61a0c1a3afea146e8ed934ca235ff4cd3b8e74202c7cdf91f8538"}
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.142236 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ec7096-d911-4727-ba80-f9ea52e53d47-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.142790 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.156774 4771 scope.go:117] "RemoveContainer" containerID="e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.167708 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.204791 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.225910 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.261998 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.279854 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.280413 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerName="dnsmasq-dns"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280436 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerName="dnsmasq-dns"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.280463 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-httpd"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280471 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-httpd"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.280486 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-log"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280493 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-log"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.280509 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae40a990-21b7-4463-b9b0-92fc34c270da" containerName="init"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280516 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae40a990-21b7-4463-b9b0-92fc34c270da" containerName="init"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.280536 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-log"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280542 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-log"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.280551 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerName="init"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280558 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerName="init"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.280578 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-httpd"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280586 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-httpd"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280889 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-httpd"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280906 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-httpd"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280920 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" containerName="glance-log"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280931 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" containerName="glance-log"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280942 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae40a990-21b7-4463-b9b0-92fc34c270da" containerName="init"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.280957 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0c00c1-1460-4ff7-ae82-03acda6d79ef" containerName="dnsmasq-dns"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.282026 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.286155 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x86l6"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.286580 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.286805 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.324018 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.330157 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.338873 4771 scope.go:117] "RemoveContainer" containerID="799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.346309 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.348885 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2\": container with ID starting with 799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2 not found: ID does not exist" containerID="799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.356651 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2"} err="failed to get container status \"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2\": rpc error: code = NotFound desc = could not find container \"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2\": container with ID starting with 799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2 not found: ID does not exist"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.356897 4771 scope.go:117] "RemoveContainer" containerID="e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd"
Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.380254 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd\": container with ID starting with e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd not found: ID does not exist" containerID="e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd"
Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.380335 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd"} err="failed to get container status \"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd\": rpc error: code = NotFound desc = could not find container \"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd\": container with ID starting with e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd
not found: ID does not exist" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.380370 4771 scope.go:117] "RemoveContainer" containerID="799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.382330 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2"} err="failed to get container status \"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2\": rpc error: code = NotFound desc = could not find container \"799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2\": container with ID starting with 799cdc487f9a6e16ad5b58df94d56a26f18efca8377cce1ee95289094d1b5ae2 not found: ID does not exist" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.382493 4771 scope.go:117] "RemoveContainer" containerID="e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.399061 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd"} err="failed to get container status \"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd\": rpc error: code = NotFound desc = could not find container \"e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd\": container with ID starting with e7f5b022cf7f1d3287489f300b97fbc9525bf1639fa187d8e6cc2be4b746a8bd not found: ID does not exist" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.399359 4771 scope.go:117] "RemoveContainer" containerID="2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.399844 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.451864 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.470340 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-logs\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.470606 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.470685 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.470768 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnljk\" (UniqueName: 
\"kubernetes.io/projected/781b98c1-28a5-455a-a1d3-4bd92e1692fc-kube-api-access-hnljk\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.470814 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.470903 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471082 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471132 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471199 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471232 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471284 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-logs\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471316 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471354 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.471381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrgs4\" (UniqueName: \"kubernetes.io/projected/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-kube-api-access-qrgs4\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.513396 4771 scope.go:117] "RemoveContainer" containerID="15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574031 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnljk\" (UniqueName: \"kubernetes.io/projected/781b98c1-28a5-455a-a1d3-4bd92e1692fc-kube-api-access-hnljk\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574107 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574166 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574243 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574271 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574305 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574328 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc 
kubenswrapper[4771]: I0129 09:25:07.574357 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-logs\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574382 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574412 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574436 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrgs4\" (UniqueName: \"kubernetes.io/projected/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-kube-api-access-qrgs4\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574470 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-logs\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574541 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.574575 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.575196 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.575888 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.577331 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-logs\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.579888 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-logs\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.580751 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.580768 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.605872 4771 scope.go:117] "RemoveContainer" containerID="2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.607768 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrgs4\" (UniqueName: \"kubernetes.io/projected/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-kube-api-access-qrgs4\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.610608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.611347 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.612017 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.613755 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.615138 4771 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06\": container with ID starting with 2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06 not found: ID does not exist" containerID="2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.615184 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06"} err="failed to get container status \"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06\": rpc error: code = NotFound desc = could not find container \"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06\": container with ID starting with 2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06 not found: ID does not exist" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.615223 4771 scope.go:117] "RemoveContainer" containerID="15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.616761 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: E0129 09:25:07.617010 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb\": container with ID starting with 15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb not found: ID does not exist" containerID="15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.617118 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb"} err="failed to get container status \"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb\": rpc error: code = NotFound desc = could not find container \"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb\": container with ID starting with 15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb not found: ID does not exist" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.617218 4771 scope.go:117] "RemoveContainer" containerID="2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.621823 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnljk\" (UniqueName: \"kubernetes.io/projected/781b98c1-28a5-455a-a1d3-4bd92e1692fc-kube-api-access-hnljk\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.622013 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06"} err="failed to get container status \"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06\": rpc error: code = NotFound desc = could not find 
container \"2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06\": container with ID starting with 2dfc1ebb49d06ece0d3bf1dc55a79f046772876fa6e609303c4566c097a98e06 not found: ID does not exist" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.622061 4771 scope.go:117] "RemoveContainer" containerID="15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.622676 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb"} err="failed to get container status \"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb\": rpc error: code = NotFound desc = could not find container \"15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb\": container with ID starting with 15925114e506207deca545e62564ec5602f1d18ea755fb8043f6659d73e2b6eb not found: ID does not exist" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.659036 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.681821 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.709291 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.713679 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.724590 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.830632 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:07 crc kubenswrapper[4771]: I0129 09:25:07.920414 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:08 crc kubenswrapper[4771]: I0129 09:25:08.574840 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:08 crc kubenswrapper[4771]: I0129 09:25:08.852646 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e00d81c-fcca-450b-be4f-8a47b837b28e" path="/var/lib/kubelet/pods/4e00d81c-fcca-450b-be4f-8a47b837b28e/volumes" Jan 29 09:25:08 crc kubenswrapper[4771]: I0129 09:25:08.854341 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ec7096-d911-4727-ba80-f9ea52e53d47" path="/var/lib/kubelet/pods/a2ec7096-d911-4727-ba80-f9ea52e53d47/volumes" Jan 29 09:25:09 crc kubenswrapper[4771]: I0129 09:25:09.195033 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"781b98c1-28a5-455a-a1d3-4bd92e1692fc","Type":"ContainerStarted","Data":"1feedc6b892f385bd6d399429d63f7a7d43c3efbd49b856ab27a81bfc6feb7a7"} Jan 29 09:25:09 crc kubenswrapper[4771]: I0129 09:25:09.244330 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.134731 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66cc8c846f-2txw5"] Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.187818 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d5dc7fbb8-8h9gn"] Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.228150 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.244950 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d5dc7fbb8-8h9gn"] Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.247919 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.276343 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cedb34-7c52-47ec-8f60-5d3e362f5948-logs\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.276404 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbz4g\" (UniqueName: \"kubernetes.io/projected/55cedb34-7c52-47ec-8f60-5d3e362f5948-kube-api-access-gbz4g\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.276501 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-config-data\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.276564 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-combined-ca-bundle\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.276637 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-tls-certs\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.277644 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-secret-key\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.277687 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-scripts\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.311887 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"781b98c1-28a5-455a-a1d3-4bd92e1692fc","Type":"ContainerStarted","Data":"3464b853804f9d653f2579bed2e2c7dfda358fe4dc70c2f28df3b0e0e129e257"} Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.362245 
4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-568bc96dfc-zdb8c"] Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.380191 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-combined-ca-bundle\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.380279 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-tls-certs\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.380328 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-secret-key\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.380346 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-scripts\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.380391 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cedb34-7c52-47ec-8f60-5d3e362f5948-logs\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.380410 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbz4g\" (UniqueName: \"kubernetes.io/projected/55cedb34-7c52-47ec-8f60-5d3e362f5948-kube-api-access-gbz4g\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.380487 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-config-data\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.381894 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cedb34-7c52-47ec-8f60-5d3e362f5948-logs\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.382384 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-scripts\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.383588 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-config-data\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.391269 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67d9579b5b-l9trm"] Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.393458 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.394451 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-combined-ca-bundle\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.395072 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-tls-certs\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.427386 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67d9579b5b-l9trm"] Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.433111 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbz4g\" (UniqueName: \"kubernetes.io/projected/55cedb34-7c52-47ec-8f60-5d3e362f5948-kube-api-access-gbz4g\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.466520 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-secret-key\") pod \"horizon-6d5dc7fbb8-8h9gn\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.494321 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-combined-ca-bundle\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.494399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-horizon-secret-key\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.494550 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d093a30-424c-4a0c-a749-7a47328c4b2d-config-data\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc 
kubenswrapper[4771]: I0129 09:25:10.494899 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d093a30-424c-4a0c-a749-7a47328c4b2d-logs\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.494970 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-horizon-tls-certs\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.495029 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d093a30-424c-4a0c-a749-7a47328c4b2d-scripts\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.495062 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m522\" (UniqueName: \"kubernetes.io/projected/3d093a30-424c-4a0c-a749-7a47328c4b2d-kube-api-access-7m522\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.597086 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-combined-ca-bundle\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.597138 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-horizon-secret-key\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.597158 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d093a30-424c-4a0c-a749-7a47328c4b2d-config-data\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.597224 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d093a30-424c-4a0c-a749-7a47328c4b2d-logs\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.597250 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-horizon-tls-certs\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.597271 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d093a30-424c-4a0c-a749-7a47328c4b2d-scripts\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.597292 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m522\" (UniqueName: \"kubernetes.io/projected/3d093a30-424c-4a0c-a749-7a47328c4b2d-kube-api-access-7m522\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.598429 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d093a30-424c-4a0c-a749-7a47328c4b2d-logs\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.600788 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.601560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d093a30-424c-4a0c-a749-7a47328c4b2d-config-data\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.602132 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d093a30-424c-4a0c-a749-7a47328c4b2d-scripts\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.606617 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-horizon-secret-key\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.607470 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-combined-ca-bundle\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.610657 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d093a30-424c-4a0c-a749-7a47328c4b2d-horizon-tls-certs\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.629462 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m522\" (UniqueName: \"kubernetes.io/projected/3d093a30-424c-4a0c-a749-7a47328c4b2d-kube-api-access-7m522\") pod \"horizon-67d9579b5b-l9trm\" (UID: \"3d093a30-424c-4a0c-a749-7a47328c4b2d\") " pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:10 crc kubenswrapper[4771]: I0129 09:25:10.855527 4771 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:11 crc kubenswrapper[4771]: I0129 09:25:11.328520 4771 generic.go:334] "Generic (PLEG): container finished" podID="d42c23a8-c6c5-4def-9c48-9d3320212293" containerID="210faa8939b1d3439762d9c1ade0884e7cddf35d1eccb277509739a4e9a09e15" exitCode=0 Jan 29 09:25:11 crc kubenswrapper[4771]: I0129 09:25:11.328587 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qd4ql" event={"ID":"d42c23a8-c6c5-4def-9c48-9d3320212293","Type":"ContainerDied","Data":"210faa8939b1d3439762d9c1ade0884e7cddf35d1eccb277509739a4e9a09e15"} Jan 29 09:25:11 crc kubenswrapper[4771]: I0129 09:25:11.598953 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:11 crc kubenswrapper[4771]: I0129 09:25:11.660619 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9hc6l"] Jan 29 09:25:11 crc kubenswrapper[4771]: I0129 09:25:11.660894 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" containerID="cri-o://c344d6f90b22c61b8e5c1a0e39a83f45b277933c0618a0431a31c5628c9a4a41" gracePeriod=10 Jan 29 09:25:12 crc kubenswrapper[4771]: I0129 09:25:12.343403 4771 generic.go:334] "Generic (PLEG): container finished" podID="2505a610-4ed2-406a-9215-7e8a23df996d" containerID="c344d6f90b22c61b8e5c1a0e39a83f45b277933c0618a0431a31c5628c9a4a41" exitCode=0 Jan 29 09:25:12 crc kubenswrapper[4771]: I0129 09:25:12.343475 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" event={"ID":"2505a610-4ed2-406a-9215-7e8a23df996d","Type":"ContainerDied","Data":"c344d6f90b22c61b8e5c1a0e39a83f45b277933c0618a0431a31c5628c9a4a41"} Jan 29 09:25:13 crc kubenswrapper[4771]: I0129 09:25:13.893194 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0" Jan 29 09:25:13 crc kubenswrapper[4771]: I0129 09:25:13.905439 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ce7b26-bcc5-4306-ab2c-5691cceeb18f-etc-swift\") pod \"swift-storage-0\" (UID: \"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f\") " pod="openstack/swift-storage-0" Jan 29 09:25:13 crc kubenswrapper[4771]: I0129 09:25:13.963171 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 29 09:25:14 crc kubenswrapper[4771]: I0129 09:25:14.011517 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 09:25:16 crc kubenswrapper[4771]: W0129 09:25:16.539633 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e93a3a2_6818_4b61_a9d9_bba9e4927dd1.slice/crio-584f7858a9558d5be5ebea941992b9825e6ac2210f0b9f0aa277cc4f1b267b1f WatchSource:0}: Error finding container 584f7858a9558d5be5ebea941992b9825e6ac2210f0b9f0aa277cc4f1b267b1f: Status 404 returned error can't find the container with id 584f7858a9558d5be5ebea941992b9825e6ac2210f0b9f0aa277cc4f1b267b1f Jan 29 09:25:17 crc kubenswrapper[4771]: I0129 09:25:17.405260 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1","Type":"ContainerStarted","Data":"584f7858a9558d5be5ebea941992b9825e6ac2210f0b9f0aa277cc4f1b267b1f"} Jan 29 09:25:18 crc kubenswrapper[4771]: I0129 09:25:18.962291 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 29 09:25:23 crc kubenswrapper[4771]: I0129 09:25:23.962295 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 29 09:25:23 crc kubenswrapper[4771]: I0129 09:25:23.963022 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:24.994295 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:24.994454 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n587h5fbh587hb6hdh5h679h576h5h54dh59chb8h67fh5ffh9ch76h77h54h699h9fhdbh554h676h589h5bh655h594h59fhbfh597h5f9hf8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vd46f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f9f46d67c-jvqxf_openstack(b80061c9-e692-4bdd-8cf0-d9e9217d206d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:24.998903 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-f9f46d67c-jvqxf" podUID="b80061c9-e692-4bdd-8cf0-d9e9217d206d" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.026289 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.026717 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h85hf6h5f4h697h579h7bh5b8h66dhdh546h646h65dh5dbh77h65h6h686h54bhdbh684h64bh598h648h5d8h67bhbfh56dh95h4h54ch5dbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8gk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-568bc96dfc-zdb8c_openstack(aba80d5e-ee59-4305-974f-7645a09de37b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.029853 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-568bc96dfc-zdb8c" podUID="aba80d5e-ee59-4305-974f-7645a09de37b" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.036932 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.037135 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch99h688h55chchd6h56bhd8hdch557h547h5h559h65dh567hb4hcfh58h695h5fchf6h9ch6h8dh7dh658h56h6ch58bh56h697h6fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj5qw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66cc8c846f-2txw5_openstack(be50ed28-504a-4658-9e82-0e9d76c0a141): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.044454 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66cc8c846f-2txw5" podUID="be50ed28-504a-4658-9e82-0e9d76c0a141"
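
All three horizon replicas fail the same way here: CRI-O cancels the image copy ("copying config: context canceled"), the kubelet reports ErrImagePull for the container that triggered the pull, and the sibling container waiting on the same image flips to ImagePullBackOff while the retry backoff runs. A minimal client-go sketch for spotting pods stuck in either state; the openstack namespace is taken from the log, but the kubeconfig path and the program itself are illustrative assumptions, not anything the kubelet runs:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumes a standard kubeconfig at ~/.kube/config; adjust for your cluster.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pods, err := cs.CoreV1().Pods("openstack").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        for _, s := range p.Status.ContainerStatuses {
            // Waiting.Reason mirrors the per-container reasons the kubelet logs above.
            if w := s.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
                fmt.Printf("%s container=%s reason=%s\n", p.Name, s.Name, w.Reason)
            }
        }
    }
}

Run against this cluster at this moment, a check like that would presumably have surfaced the horizon, barbican-db-sync, and cinder-db-sync containers seen failing in this log.
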
Need to start a new one" pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.169643 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-credential-keys\") pod \"d42c23a8-c6c5-4def-9c48-9d3320212293\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.169789 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-fernet-keys\") pod \"d42c23a8-c6c5-4def-9c48-9d3320212293\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.169835 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-config-data\") pod \"d42c23a8-c6c5-4def-9c48-9d3320212293\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.169865 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-scripts\") pod \"d42c23a8-c6c5-4def-9c48-9d3320212293\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.169971 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dchmm\" (UniqueName: \"kubernetes.io/projected/d42c23a8-c6c5-4def-9c48-9d3320212293-kube-api-access-dchmm\") pod \"d42c23a8-c6c5-4def-9c48-9d3320212293\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.170028 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-combined-ca-bundle\") pod \"d42c23a8-c6c5-4def-9c48-9d3320212293\" (UID: \"d42c23a8-c6c5-4def-9c48-9d3320212293\") " Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.178380 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d42c23a8-c6c5-4def-9c48-9d3320212293" (UID: "d42c23a8-c6c5-4def-9c48-9d3320212293"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.195246 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-scripts" (OuterVolumeSpecName: "scripts") pod "d42c23a8-c6c5-4def-9c48-9d3320212293" (UID: "d42c23a8-c6c5-4def-9c48-9d3320212293"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.195639 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42c23a8-c6c5-4def-9c48-9d3320212293-kube-api-access-dchmm" (OuterVolumeSpecName: "kube-api-access-dchmm") pod "d42c23a8-c6c5-4def-9c48-9d3320212293" (UID: "d42c23a8-c6c5-4def-9c48-9d3320212293"). InnerVolumeSpecName "kube-api-access-dchmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.196887 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d42c23a8-c6c5-4def-9c48-9d3320212293" (UID: "d42c23a8-c6c5-4def-9c48-9d3320212293"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.209956 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-config-data" (OuterVolumeSpecName: "config-data") pod "d42c23a8-c6c5-4def-9c48-9d3320212293" (UID: "d42c23a8-c6c5-4def-9c48-9d3320212293"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.229877 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d42c23a8-c6c5-4def-9c48-9d3320212293" (UID: "d42c23a8-c6c5-4def-9c48-9d3320212293"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.271367 4771 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.271415 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.271428 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.271442 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.271456 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dchmm\" (UniqueName: \"kubernetes.io/projected/d42c23a8-c6c5-4def-9c48-9d3320212293-kube-api-access-dchmm\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.271467 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c23a8-c6c5-4def-9c48-9d3320212293-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.648642 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.649150 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nv2pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-bvppf_openstack(015c0ccc-d729-4d0a-8168-b897f1c451da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.650730 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-bvppf" podUID="015c0ccc-d729-4d0a-8168-b897f1c451da" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.696785 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qd4ql" event={"ID":"d42c23a8-c6c5-4def-9c48-9d3320212293","Type":"ContainerDied","Data":"e43551a3e96ba4b31565ef9efb4456a413b1d929069c6286552839828e336a1f"} Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.696872 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43551a3e96ba4b31565ef9efb4456a413b1d929069c6286552839828e336a1f" Jan 29 09:25:25 crc kubenswrapper[4771]: I0129 09:25:25.697064 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qd4ql" Jan 29 09:25:25 crc kubenswrapper[4771]: E0129 09:25:25.699650 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-bvppf" podUID="015c0ccc-d729-4d0a-8168-b897f1c451da" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.192721 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qd4ql"] Jan 29 09:25:26 crc kubenswrapper[4771]: E0129 09:25:26.195113 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 29 09:25:26 crc kubenswrapper[4771]: E0129 09:25:26.195367 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599hdch84h77h9fh589h5ffh5cch54ch9bh5d9h5d7h76h6dh64h589h5c5h589h5d4hfch5ddh57bh59dhc9h69h68fh64ch596h5b6h67dh5bdh568q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfnpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.199481 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qd4ql"] Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.308415 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2mfct"] Jan 29 09:25:26 crc kubenswrapper[4771]: E0129 09:25:26.309028 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42c23a8-c6c5-4def-9c48-9d3320212293" containerName="keystone-bootstrap" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.309046 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42c23a8-c6c5-4def-9c48-9d3320212293" containerName="keystone-bootstrap" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.309365 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42c23a8-c6c5-4def-9c48-9d3320212293" containerName="keystone-bootstrap" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.310170 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.314010 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.314307 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.315948 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-497zd" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.320028 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2mfct"] Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.325969 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.326319 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.421346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-fernet-keys\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.421413 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-config-data\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.421453 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-scripts\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.421488 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46txf\" (UniqueName: 
\"kubernetes.io/projected/8a9e93f7-bada-4141-887c-4174d899b95e-kube-api-access-46txf\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.421680 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-combined-ca-bundle\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.421778 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-credential-keys\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.525893 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-fernet-keys\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.525942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-config-data\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.525960 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-scripts\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.525985 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46txf\" (UniqueName: \"kubernetes.io/projected/8a9e93f7-bada-4141-887c-4174d899b95e-kube-api-access-46txf\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.526040 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-combined-ca-bundle\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.526068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-credential-keys\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.572664 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-scripts\") pod \"keystone-bootstrap-2mfct\" 
(UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.575160 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-credential-keys\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.582167 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-combined-ca-bundle\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.583474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-config-data\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.590477 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-fernet-keys\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.615466 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46txf\" (UniqueName: \"kubernetes.io/projected/8a9e93f7-bada-4141-887c-4174d899b95e-kube-api-access-46txf\") pod \"keystone-bootstrap-2mfct\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.651595 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:26 crc kubenswrapper[4771]: I0129 09:25:26.850269 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42c23a8-c6c5-4def-9c48-9d3320212293" path="/var/lib/kubelet/pods/d42c23a8-c6c5-4def-9c48-9d3320212293/volumes" Jan 29 09:25:33 crc kubenswrapper[4771]: I0129 09:25:33.961829 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.806620 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" event={"ID":"2505a610-4ed2-406a-9215-7e8a23df996d","Type":"ContainerDied","Data":"f79d5777329718ef20dc33c62cb2bacb4bd346bd0dbc60cc3a0e7270c47aadfb"} Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.807582 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79d5777329718ef20dc33c62cb2bacb4bd346bd0dbc60cc3a0e7270c47aadfb" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.808392 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9f46d67c-jvqxf" event={"ID":"b80061c9-e692-4bdd-8cf0-d9e9217d206d","Type":"ContainerDied","Data":"454de75d92ae4da8ab5e11e788fe61ad492c43f2e07c7db425845c6657abef25"} Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.808442 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="454de75d92ae4da8ab5e11e788fe61ad492c43f2e07c7db425845c6657abef25" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.811667 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-568bc96dfc-zdb8c" event={"ID":"aba80d5e-ee59-4305-974f-7645a09de37b","Type":"ContainerDied","Data":"4279a017ea2785166eee2af636a5291b645752bbe9a6d0325307cb608002dc62"} Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.811738 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4279a017ea2785166eee2af636a5291b645752bbe9a6d0325307cb608002dc62" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.813261 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66cc8c846f-2txw5" event={"ID":"be50ed28-504a-4658-9e82-0e9d76c0a141","Type":"ContainerDied","Data":"78f40a8b2bec5680ecf08f7dfa05e81a3394d54e78455c6482a3f4d1a03409de"} Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.813310 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f40a8b2bec5680ecf08f7dfa05e81a3394d54e78455c6482a3f4d1a03409de" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.867121 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.874863 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-568bc96dfc-zdb8c" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.880437 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-config-data\") pod \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.880571 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b80061c9-e692-4bdd-8cf0-d9e9217d206d-horizon-secret-key\") pod \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.880602 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd46f\" (UniqueName: \"kubernetes.io/projected/b80061c9-e692-4bdd-8cf0-d9e9217d206d-kube-api-access-vd46f\") pod \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.880761 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-scripts\") pod \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.880827 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b80061c9-e692-4bdd-8cf0-d9e9217d206d-logs\") pod \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\" (UID: \"b80061c9-e692-4bdd-8cf0-d9e9217d206d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.882230 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-config-data" (OuterVolumeSpecName: "config-data") pod "b80061c9-e692-4bdd-8cf0-d9e9217d206d" (UID: "b80061c9-e692-4bdd-8cf0-d9e9217d206d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.882887 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80061c9-e692-4bdd-8cf0-d9e9217d206d-logs" (OuterVolumeSpecName: "logs") pod "b80061c9-e692-4bdd-8cf0-d9e9217d206d" (UID: "b80061c9-e692-4bdd-8cf0-d9e9217d206d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.883354 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-scripts" (OuterVolumeSpecName: "scripts") pod "b80061c9-e692-4bdd-8cf0-d9e9217d206d" (UID: "b80061c9-e692-4bdd-8cf0-d9e9217d206d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.889153 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80061c9-e692-4bdd-8cf0-d9e9217d206d-kube-api-access-vd46f" (OuterVolumeSpecName: "kube-api-access-vd46f") pod "b80061c9-e692-4bdd-8cf0-d9e9217d206d" (UID: "b80061c9-e692-4bdd-8cf0-d9e9217d206d"). InnerVolumeSpecName "kube-api-access-vd46f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.891527 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80061c9-e692-4bdd-8cf0-d9e9217d206d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b80061c9-e692-4bdd-8cf0-d9e9217d206d" (UID: "b80061c9-e692-4bdd-8cf0-d9e9217d206d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.892116 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.981191 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.982457 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-scripts\") pod \"aba80d5e-ee59-4305-974f-7645a09de37b\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.982500 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba80d5e-ee59-4305-974f-7645a09de37b-logs\") pod \"aba80d5e-ee59-4305-974f-7645a09de37b\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.982535 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49fhb\" (UniqueName: \"kubernetes.io/projected/2505a610-4ed2-406a-9215-7e8a23df996d-kube-api-access-49fhb\") pod \"2505a610-4ed2-406a-9215-7e8a23df996d\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.982613 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-nb\") pod \"2505a610-4ed2-406a-9215-7e8a23df996d\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.983627 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8gk4\" (UniqueName: \"kubernetes.io/projected/aba80d5e-ee59-4305-974f-7645a09de37b-kube-api-access-s8gk4\") pod \"aba80d5e-ee59-4305-974f-7645a09de37b\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.983675 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba80d5e-ee59-4305-974f-7645a09de37b-horizon-secret-key\") pod \"aba80d5e-ee59-4305-974f-7645a09de37b\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.983762 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-config\") pod \"2505a610-4ed2-406a-9215-7e8a23df996d\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.983827 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-sb\") pod \"2505a610-4ed2-406a-9215-7e8a23df996d\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.983854 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-config-data\") pod \"aba80d5e-ee59-4305-974f-7645a09de37b\" (UID: \"aba80d5e-ee59-4305-974f-7645a09de37b\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.983939 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-dns-svc\") pod \"2505a610-4ed2-406a-9215-7e8a23df996d\" (UID: \"2505a610-4ed2-406a-9215-7e8a23df996d\") " Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.984457 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b80061c9-e692-4bdd-8cf0-d9e9217d206d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.984475 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd46f\" (UniqueName: \"kubernetes.io/projected/b80061c9-e692-4bdd-8cf0-d9e9217d206d-kube-api-access-vd46f\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.984488 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.984497 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b80061c9-e692-4bdd-8cf0-d9e9217d206d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.984505 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b80061c9-e692-4bdd-8cf0-d9e9217d206d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.986310 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-scripts" (OuterVolumeSpecName: "scripts") pod "aba80d5e-ee59-4305-974f-7645a09de37b" (UID: "aba80d5e-ee59-4305-974f-7645a09de37b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.986562 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aba80d5e-ee59-4305-974f-7645a09de37b-logs" (OuterVolumeSpecName: "logs") pod "aba80d5e-ee59-4305-974f-7645a09de37b" (UID: "aba80d5e-ee59-4305-974f-7645a09de37b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.987048 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-config-data" (OuterVolumeSpecName: "config-data") pod "aba80d5e-ee59-4305-974f-7645a09de37b" (UID: "aba80d5e-ee59-4305-974f-7645a09de37b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:36 crc kubenswrapper[4771]: I0129 09:25:36.991997 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba80d5e-ee59-4305-974f-7645a09de37b-kube-api-access-s8gk4" (OuterVolumeSpecName: "kube-api-access-s8gk4") pod "aba80d5e-ee59-4305-974f-7645a09de37b" (UID: "aba80d5e-ee59-4305-974f-7645a09de37b"). InnerVolumeSpecName "kube-api-access-s8gk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.003338 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba80d5e-ee59-4305-974f-7645a09de37b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aba80d5e-ee59-4305-974f-7645a09de37b" (UID: "aba80d5e-ee59-4305-974f-7645a09de37b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.003373 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2505a610-4ed2-406a-9215-7e8a23df996d-kube-api-access-49fhb" (OuterVolumeSpecName: "kube-api-access-49fhb") pod "2505a610-4ed2-406a-9215-7e8a23df996d" (UID: "2505a610-4ed2-406a-9215-7e8a23df996d"). InnerVolumeSpecName "kube-api-access-49fhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.046789 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2505a610-4ed2-406a-9215-7e8a23df996d" (UID: "2505a610-4ed2-406a-9215-7e8a23df996d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.052559 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2505a610-4ed2-406a-9215-7e8a23df996d" (UID: "2505a610-4ed2-406a-9215-7e8a23df996d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.064300 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2505a610-4ed2-406a-9215-7e8a23df996d" (UID: "2505a610-4ed2-406a-9215-7e8a23df996d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.064904 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-config" (OuterVolumeSpecName: "config") pod "2505a610-4ed2-406a-9215-7e8a23df996d" (UID: "2505a610-4ed2-406a-9215-7e8a23df996d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.085420 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-config-data\") pod \"be50ed28-504a-4658-9e82-0e9d76c0a141\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.085526 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-scripts\") pod \"be50ed28-504a-4658-9e82-0e9d76c0a141\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.085577 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be50ed28-504a-4658-9e82-0e9d76c0a141-horizon-secret-key\") pod \"be50ed28-504a-4658-9e82-0e9d76c0a141\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.085641 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj5qw\" (UniqueName: \"kubernetes.io/projected/be50ed28-504a-4658-9e82-0e9d76c0a141-kube-api-access-vj5qw\") pod \"be50ed28-504a-4658-9e82-0e9d76c0a141\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.085877 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be50ed28-504a-4658-9e82-0e9d76c0a141-logs\") pod \"be50ed28-504a-4658-9e82-0e9d76c0a141\" (UID: \"be50ed28-504a-4658-9e82-0e9d76c0a141\") " Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.086307 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be50ed28-504a-4658-9e82-0e9d76c0a141-logs" (OuterVolumeSpecName: "logs") pod "be50ed28-504a-4658-9e82-0e9d76c0a141" (UID: "be50ed28-504a-4658-9e82-0e9d76c0a141"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.086427 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-scripts" (OuterVolumeSpecName: "scripts") pod "be50ed28-504a-4658-9e82-0e9d76c0a141" (UID: "be50ed28-504a-4658-9e82-0e9d76c0a141"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.086771 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49fhb\" (UniqueName: \"kubernetes.io/projected/2505a610-4ed2-406a-9215-7e8a23df996d-kube-api-access-49fhb\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.087576 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.087661 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.087687 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8gk4\" (UniqueName: \"kubernetes.io/projected/aba80d5e-ee59-4305-974f-7645a09de37b-kube-api-access-s8gk4\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.087840 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aba80d5e-ee59-4305-974f-7645a09de37b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.087853 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.087858 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-config-data" (OuterVolumeSpecName: "config-data") pod "be50ed28-504a-4658-9e82-0e9d76c0a141" (UID: "be50ed28-504a-4658-9e82-0e9d76c0a141"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.087868 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.088135 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.088185 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2505a610-4ed2-406a-9215-7e8a23df996d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.088204 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be50ed28-504a-4658-9e82-0e9d76c0a141-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.088215 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aba80d5e-ee59-4305-974f-7645a09de37b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.088226 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba80d5e-ee59-4305-974f-7645a09de37b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.089983 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be50ed28-504a-4658-9e82-0e9d76c0a141-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "be50ed28-504a-4658-9e82-0e9d76c0a141" (UID: "be50ed28-504a-4658-9e82-0e9d76c0a141"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.090019 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be50ed28-504a-4658-9e82-0e9d76c0a141-kube-api-access-vj5qw" (OuterVolumeSpecName: "kube-api-access-vj5qw") pod "be50ed28-504a-4658-9e82-0e9d76c0a141" (UID: "be50ed28-504a-4658-9e82-0e9d76c0a141"). InnerVolumeSpecName "kube-api-access-vj5qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.190586 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be50ed28-504a-4658-9e82-0e9d76c0a141-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.190630 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be50ed28-504a-4658-9e82-0e9d76c0a141-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.190654 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj5qw\" (UniqueName: \"kubernetes.io/projected/be50ed28-504a-4658-9e82-0e9d76c0a141-kube-api-access-vj5qw\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.821796 4771 generic.go:334] "Generic (PLEG): container finished" podID="5de5b9ec-6c6b-4e51-a053-d0076c2c729e" containerID="02113350a64fb52a1a50a806562328d66a7f36ada03fabf6d1498be3bd7e7929" exitCode=0 Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.821874 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-568bc96dfc-zdb8c" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.823143 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9f46d67c-jvqxf" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.823169 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66cc8c846f-2txw5" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.823182 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.823211 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p2ghj" event={"ID":"5de5b9ec-6c6b-4e51-a053-d0076c2c729e","Type":"ContainerDied","Data":"02113350a64fb52a1a50a806562328d66a7f36ada03fabf6d1498be3bd7e7929"} Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.887125 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9hc6l"] Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.910039 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9hc6l"] Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.925433 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9f46d67c-jvqxf"] Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.942855 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f9f46d67c-jvqxf"] Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.961857 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66cc8c846f-2txw5"] Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.973371 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66cc8c846f-2txw5"] Jan 29 09:25:37 crc kubenswrapper[4771]: I0129 09:25:37.988921 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-568bc96dfc-zdb8c"] Jan 29 09:25:38 crc kubenswrapper[4771]: I0129 09:25:38.000170 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-568bc96dfc-zdb8c"] Jan 29 09:25:38 crc kubenswrapper[4771]: E0129 09:25:38.215311 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 09:25:38 crc kubenswrapper[4771]: E0129 09:25:38.215890 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppsn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-g2fcd_openstack(29ab9c1d-3798-4151-bf0b-63227f0e45a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 09:25:38 crc kubenswrapper[4771]: E0129 09:25:38.217072 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-g2fcd" podUID="29ab9c1d-3798-4151-bf0b-63227f0e45a4"
Jan 29 09:25:38 crc kubenswrapper[4771]: E0129 09:25:38.866845 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-g2fcd" podUID="29ab9c1d-3798-4151-bf0b-63227f0e45a4"
Jan 29 09:25:38 crc kubenswrapper[4771]: I0129 09:25:38.882273 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" path="/var/lib/kubelet/pods/2505a610-4ed2-406a-9215-7e8a23df996d/volumes"
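
The "Unhandled Error" dumps in this log print the failing corev1.Container spec inline in Go struct syntax (pointer fields rendered as *value). Transcribing the non-zero fields of the cinder-db-sync container above back into Go makes the dump easier to read; the values are copied from the log, but the program itself is only an illustration, not the operator's source:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
    // Fields taken from the cinder-db-sync dump above; every other field in
    // the dump is zero-valued. Note RunAsUser:*0 (root) on this job container,
    // unlike the horizon containers earlier (RunAsUser:*48, RunAsNonRoot:*true).
    c := corev1.Container{
        Name:    "cinder-db-sync",
        Image:   "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified",
        Command: []string{"/bin/bash"},
        Args:    []string{"-c", "/usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start"},
        Env: []corev1.EnvVar{
            {Name: "KOLLA_BOOTSTRAP", Value: "TRUE"},
            {Name: "KOLLA_CONFIG_STRATEGY", Value: "COPY_ALWAYS"},
        },
        ImagePullPolicy: corev1.PullIfNotPresent,
        SecurityContext: &corev1.SecurityContext{RunAsUser: ptr(int64(0))},
    }
    fmt.Printf("%+v\n", c)
}
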
podUID="aba80d5e-ee59-4305-974f-7645a09de37b" path="/var/lib/kubelet/pods/aba80d5e-ee59-4305-974f-7645a09de37b/volumes" Jan 29 09:25:38 crc kubenswrapper[4771]: I0129 09:25:38.883623 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b80061c9-e692-4bdd-8cf0-d9e9217d206d" path="/var/lib/kubelet/pods/b80061c9-e692-4bdd-8cf0-d9e9217d206d/volumes" Jan 29 09:25:38 crc kubenswrapper[4771]: I0129 09:25:38.884169 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be50ed28-504a-4658-9e82-0e9d76c0a141" path="/var/lib/kubelet/pods/be50ed28-504a-4658-9e82-0e9d76c0a141/volumes" Jan 29 09:25:38 crc kubenswrapper[4771]: I0129 09:25:38.964388 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-9hc6l" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Jan 29 09:25:38 crc kubenswrapper[4771]: I0129 09:25:38.965248 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67d9579b5b-l9trm"] Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.103158 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d5dc7fbb8-8h9gn"] Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.202639 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 09:25:39 crc kubenswrapper[4771]: W0129 09:25:39.235215 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ce7b26_bcc5_4306_ab2c_5691cceeb18f.slice/crio-9c534084dc71849d23883b28265ec27841ce79e5be140e0d7abc80d34ffc6396 WatchSource:0}: Error finding container 9c534084dc71849d23883b28265ec27841ce79e5be140e0d7abc80d34ffc6396: Status 404 returned error can't find the container with id 9c534084dc71849d23883b28265ec27841ce79e5be140e0d7abc80d34ffc6396 Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.281777 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2mfct"] Jan 29 09:25:39 crc kubenswrapper[4771]: W0129 09:25:39.303365 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a9e93f7_bada_4141_887c_4174d899b95e.slice/crio-d7e17fe212a7dc4154988eb78647f7bf26f1fbd5433fe12fa0eed3a95d69832a WatchSource:0}: Error finding container d7e17fe212a7dc4154988eb78647f7bf26f1fbd5433fe12fa0eed3a95d69832a: Status 404 returned error can't find the container with id d7e17fe212a7dc4154988eb78647f7bf26f1fbd5433fe12fa0eed3a95d69832a Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.321148 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.349841 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64nq2\" (UniqueName: \"kubernetes.io/projected/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-kube-api-access-64nq2\") pod \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.349939 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-combined-ca-bundle\") pod \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.350049 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-config\") pod \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\" (UID: \"5de5b9ec-6c6b-4e51-a053-d0076c2c729e\") " Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.376055 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-kube-api-access-64nq2" (OuterVolumeSpecName: "kube-api-access-64nq2") pod "5de5b9ec-6c6b-4e51-a053-d0076c2c729e" (UID: "5de5b9ec-6c6b-4e51-a053-d0076c2c729e"). InnerVolumeSpecName "kube-api-access-64nq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.382439 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-config" (OuterVolumeSpecName: "config") pod "5de5b9ec-6c6b-4e51-a053-d0076c2c729e" (UID: "5de5b9ec-6c6b-4e51-a053-d0076c2c729e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.404220 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5de5b9ec-6c6b-4e51-a053-d0076c2c729e" (UID: "5de5b9ec-6c6b-4e51-a053-d0076c2c729e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.452459 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.452494 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64nq2\" (UniqueName: \"kubernetes.io/projected/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-kube-api-access-64nq2\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.452506 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de5b9ec-6c6b-4e51-a053-d0076c2c729e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.916326 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5dc7fbb8-8h9gn" event={"ID":"55cedb34-7c52-47ec-8f60-5d3e362f5948","Type":"ContainerStarted","Data":"c5950547eca69982dd0e9e9814cffb226399c8a4a328d5527b65ecfa56a30944"} Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.984051 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bvppf" event={"ID":"015c0ccc-d729-4d0a-8168-b897f1c451da","Type":"ContainerStarted","Data":"91be09616b3fb43ae30e99014c8a1b5f10e78679d0a55fa87b96ff86e709a2ef"} Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.993356 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"781b98c1-28a5-455a-a1d3-4bd92e1692fc","Type":"ContainerStarted","Data":"234a8f32d7d8c8d46847d6efcd962a15ec9e37cb5dc6b492b04faf8f820fbae0"} Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.993524 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-log" containerID="cri-o://3464b853804f9d653f2579bed2e2c7dfda358fe4dc70c2f28df3b0e0e129e257" gracePeriod=30 Jan 29 09:25:39 crc kubenswrapper[4771]: I0129 09:25:39.993978 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-httpd" containerID="cri-o://234a8f32d7d8c8d46847d6efcd962a15ec9e37cb5dc6b492b04faf8f820fbae0" gracePeriod=30 Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.036658 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2mfct" event={"ID":"8a9e93f7-bada-4141-887c-4174d899b95e","Type":"ContainerStarted","Data":"c7f0a480a6332a49cb2d4e45193ef1fed5cfded327bae127694ebc276bd37041"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.036718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2mfct" event={"ID":"8a9e93f7-bada-4141-887c-4174d899b95e","Type":"ContainerStarted","Data":"d7e17fe212a7dc4154988eb78647f7bf26f1fbd5433fe12fa0eed3a95d69832a"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.042038 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-m957m"] Jan 29 09:25:40 crc kubenswrapper[4771]: E0129 09:25:40.042484 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="init" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.042503 
4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="init" Jan 29 09:25:40 crc kubenswrapper[4771]: E0129 09:25:40.042534 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de5b9ec-6c6b-4e51-a053-d0076c2c729e" containerName="neutron-db-sync" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.042541 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de5b9ec-6c6b-4e51-a053-d0076c2c729e" containerName="neutron-db-sync" Jan 29 09:25:40 crc kubenswrapper[4771]: E0129 09:25:40.042560 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.042566 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.042747 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2505a610-4ed2-406a-9215-7e8a23df996d" containerName="dnsmasq-dns" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.042766 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de5b9ec-6c6b-4e51-a053-d0076c2c729e" containerName="neutron-db-sync" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.043779 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.062247 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-m957m"] Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.064219 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p2ghj" event={"ID":"5de5b9ec-6c6b-4e51-a053-d0076c2c729e","Type":"ContainerDied","Data":"55501460cf0282477722afa8e500e0d6d7b62e6e50b5248a0e1e2c83b00aebc4"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.064272 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55501460cf0282477722afa8e500e0d6d7b62e6e50b5248a0e1e2c83b00aebc4" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.064361 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-p2ghj" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.077534 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.077840 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-dns-svc\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.077881 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.077969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-config\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.078016 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gz7\" (UniqueName: \"kubernetes.io/projected/bcbc418c-ff34-4040-bfc9-2c8111568cd0-kube-api-access-w2gz7\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.108284 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bvppf" podStartSLOduration=3.653643926 podStartE2EDuration="40.108248295s" podCreationTimestamp="2026-01-29 09:25:00 +0000 UTC" firstStartedPulling="2026-01-29 09:25:02.246319642 +0000 UTC m=+1122.369159869" lastFinishedPulling="2026-01-29 09:25:38.700924011 +0000 UTC m=+1158.823764238" observedRunningTime="2026-01-29 09:25:40.028179907 +0000 UTC m=+1160.151020134" watchObservedRunningTime="2026-01-29 09:25:40.108248295 +0000 UTC m=+1160.231088532" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.110163 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v8h9" event={"ID":"883872f0-cb88-4095-b918-b971d8c3c0b6","Type":"ContainerStarted","Data":"79076e4a83c2a714ab820355ea704fbdde238eb57135e539a9aa3f931719370a"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.120330 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"9c534084dc71849d23883b28265ec27841ce79e5be140e0d7abc80d34ffc6396"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.137647 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d9579b5b-l9trm" 
event={"ID":"3d093a30-424c-4a0c-a749-7a47328c4b2d","Type":"ContainerStarted","Data":"0f3c0cdc61178d182b216467ccaebb76a2533a351acf3a3b7b0455010899e787"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.154814 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.154793107 podStartE2EDuration="33.154793107s" podCreationTimestamp="2026-01-29 09:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:40.138161672 +0000 UTC m=+1160.261001899" watchObservedRunningTime="2026-01-29 09:25:40.154793107 +0000 UTC m=+1160.277633324" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.216832 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerStarted","Data":"73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.224387 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-dns-svc\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.224478 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.224624 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-config\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.224734 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2gz7\" (UniqueName: \"kubernetes.io/projected/bcbc418c-ff34-4040-bfc9-2c8111568cd0-kube-api-access-w2gz7\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.225014 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.226238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.229151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.229852 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-dns-svc\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.234831 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1","Type":"ContainerStarted","Data":"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474"} Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.285018 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-config\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.314238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2gz7\" (UniqueName: \"kubernetes.io/projected/bcbc418c-ff34-4040-bfc9-2c8111568cd0-kube-api-access-w2gz7\") pod \"dnsmasq-dns-b6c948c7-m957m\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.405144 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.438776 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b7b97bcf6-xn2wg"] Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.440708 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.459345 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-26mfg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.459495 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.459613 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.459749 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.494690 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7b97bcf6-xn2wg"] Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.512607 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2mfct" podStartSLOduration=14.512561223 podStartE2EDuration="14.512561223s" podCreationTimestamp="2026-01-29 09:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:40.412812337 +0000 UTC m=+1160.535652564" watchObservedRunningTime="2026-01-29 09:25:40.512561223 +0000 UTC m=+1160.635401450" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.520930 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8v8h9" podStartSLOduration=6.159339255 podStartE2EDuration="40.520910981s" podCreationTimestamp="2026-01-29 09:25:00 +0000 UTC" firstStartedPulling="2026-01-29 09:25:02.394681356 +0000 UTC m=+1122.517521583" lastFinishedPulling="2026-01-29 09:25:36.756253082 +0000 UTC m=+1156.879093309" observedRunningTime="2026-01-29 09:25:40.494284533 +0000 UTC m=+1160.617124770" watchObservedRunningTime="2026-01-29 09:25:40.520910981 +0000 UTC m=+1160.643751198" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.556266 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-combined-ca-bundle\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.556326 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-httpd-config\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.556382 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-ovndb-tls-certs\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.556514 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x785r\" (UniqueName: 
\"kubernetes.io/projected/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-kube-api-access-x785r\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.556825 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-config\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.658550 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-ovndb-tls-certs\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.663281 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x785r\" (UniqueName: \"kubernetes.io/projected/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-kube-api-access-x785r\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.664088 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-config\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.664288 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-combined-ca-bundle\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.664562 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-httpd-config\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.666012 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-ovndb-tls-certs\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.670056 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-config\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.679422 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-httpd-config\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " 
pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.679624 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-combined-ca-bundle\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.688001 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x785r\" (UniqueName: \"kubernetes.io/projected/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-kube-api-access-x785r\") pod \"neutron-b7b97bcf6-xn2wg\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") " pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:40 crc kubenswrapper[4771]: I0129 09:25:40.863376 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.193928 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-m957m"] Jan 29 09:25:41 crc kubenswrapper[4771]: W0129 09:25:41.222627 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcbc418c_ff34_4040_bfc9_2c8111568cd0.slice/crio-1853ff60bb913a1fa21d2f7bd580aff674fba807ac45b392c296ec8064e22dbf WatchSource:0}: Error finding container 1853ff60bb913a1fa21d2f7bd580aff674fba807ac45b392c296ec8064e22dbf: Status 404 returned error can't find the container with id 1853ff60bb913a1fa21d2f7bd580aff674fba807ac45b392c296ec8064e22dbf Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.280776 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d9579b5b-l9trm" event={"ID":"3d093a30-424c-4a0c-a749-7a47328c4b2d","Type":"ContainerStarted","Data":"d96442c94e9a69858b8c8764bc8955966edbf814109f7a3ca76ba223257f2ea5"} Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.308183 4771 generic.go:334] "Generic (PLEG): container finished" podID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerID="234a8f32d7d8c8d46847d6efcd962a15ec9e37cb5dc6b492b04faf8f820fbae0" exitCode=0 Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.308240 4771 generic.go:334] "Generic (PLEG): container finished" podID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerID="3464b853804f9d653f2579bed2e2c7dfda358fe4dc70c2f28df3b0e0e129e257" exitCode=143 Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.308344 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"781b98c1-28a5-455a-a1d3-4bd92e1692fc","Type":"ContainerDied","Data":"234a8f32d7d8c8d46847d6efcd962a15ec9e37cb5dc6b492b04faf8f820fbae0"} Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.308375 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"781b98c1-28a5-455a-a1d3-4bd92e1692fc","Type":"ContainerDied","Data":"3464b853804f9d653f2579bed2e2c7dfda358fe4dc70c2f28df3b0e0e129e257"} Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.345506 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1","Type":"ContainerStarted","Data":"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f"} Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.346015 4771 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-log" containerID="cri-o://95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474" gracePeriod=30 Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.346938 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-httpd" containerID="cri-o://2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f" gracePeriod=30 Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.378244 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5dc7fbb8-8h9gn" event={"ID":"55cedb34-7c52-47ec-8f60-5d3e362f5948","Type":"ContainerStarted","Data":"f44e7228793b3cf23069b72b399157c4fe605e6acbd5233674966340d89bd2a5"} Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.401746 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.424775 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=34.424750228 podStartE2EDuration="34.424750228s" podCreationTimestamp="2026-01-29 09:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:41.389744982 +0000 UTC m=+1161.512585229" watchObservedRunningTime="2026-01-29 09:25:41.424750228 +0000 UTC m=+1161.547590455" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.448187 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podStartSLOduration=30.907989408 podStartE2EDuration="31.448162448s" podCreationTimestamp="2026-01-29 09:25:10 +0000 UTC" firstStartedPulling="2026-01-29 09:25:39.13349883 +0000 UTC m=+1159.256339057" lastFinishedPulling="2026-01-29 09:25:39.67367186 +0000 UTC m=+1159.796512097" observedRunningTime="2026-01-29 09:25:41.430332881 +0000 UTC m=+1161.553173108" watchObservedRunningTime="2026-01-29 09:25:41.448162448 +0000 UTC m=+1161.571002665" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.494620 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.494753 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-logs\") pod \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.494878 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-config-data\") pod \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.494948 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-scripts\") pod \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.494987 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-combined-ca-bundle\") pod \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.495054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-httpd-run\") pod \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.495107 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnljk\" (UniqueName: \"kubernetes.io/projected/781b98c1-28a5-455a-a1d3-4bd92e1692fc-kube-api-access-hnljk\") pod \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\" (UID: \"781b98c1-28a5-455a-a1d3-4bd92e1692fc\") " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.497128 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "781b98c1-28a5-455a-a1d3-4bd92e1692fc" (UID: "781b98c1-28a5-455a-a1d3-4bd92e1692fc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.497374 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-logs" (OuterVolumeSpecName: "logs") pod "781b98c1-28a5-455a-a1d3-4bd92e1692fc" (UID: "781b98c1-28a5-455a-a1d3-4bd92e1692fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.505438 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "781b98c1-28a5-455a-a1d3-4bd92e1692fc" (UID: "781b98c1-28a5-455a-a1d3-4bd92e1692fc"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.512028 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-scripts" (OuterVolumeSpecName: "scripts") pod "781b98c1-28a5-455a-a1d3-4bd92e1692fc" (UID: "781b98c1-28a5-455a-a1d3-4bd92e1692fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.529926 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781b98c1-28a5-455a-a1d3-4bd92e1692fc-kube-api-access-hnljk" (OuterVolumeSpecName: "kube-api-access-hnljk") pod "781b98c1-28a5-455a-a1d3-4bd92e1692fc" (UID: "781b98c1-28a5-455a-a1d3-4bd92e1692fc"). InnerVolumeSpecName "kube-api-access-hnljk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.565901 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "781b98c1-28a5-455a-a1d3-4bd92e1692fc" (UID: "781b98c1-28a5-455a-a1d3-4bd92e1692fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.598666 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.598715 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.598877 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.598895 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnljk\" (UniqueName: \"kubernetes.io/projected/781b98c1-28a5-455a-a1d3-4bd92e1692fc-kube-api-access-hnljk\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.599012 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-config-data" (OuterVolumeSpecName: "config-data") pod "781b98c1-28a5-455a-a1d3-4bd92e1692fc" (UID: "781b98c1-28a5-455a-a1d3-4bd92e1692fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.599038 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.599100 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781b98c1-28a5-455a-a1d3-4bd92e1692fc-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.650341 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.703838 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:41 crc kubenswrapper[4771]: I0129 09:25:41.703865 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781b98c1-28a5-455a-a1d3-4bd92e1692fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.134983 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.221766 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-config-data\") pod \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.221948 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-combined-ca-bundle\") pod \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.221973 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrgs4\" (UniqueName: \"kubernetes.io/projected/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-kube-api-access-qrgs4\") pod \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.222103 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.222171 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-logs\") pod \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.222193 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-httpd-run\") pod \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.222226 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-scripts\") pod \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\" (UID: \"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1\") " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.223905 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-logs" (OuterVolumeSpecName: "logs") pod "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" (UID: "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.225359 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" (UID: "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.236247 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-scripts" (OuterVolumeSpecName: "scripts") pod "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" (UID: "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.240271 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-kube-api-access-qrgs4" (OuterVolumeSpecName: "kube-api-access-qrgs4") pod "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" (UID: "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1"). InnerVolumeSpecName "kube-api-access-qrgs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.258655 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" (UID: "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.286248 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" (UID: "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.323972 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.324010 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.324018 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.324027 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.324036 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.324044 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrgs4\" (UniqueName: \"kubernetes.io/projected/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-kube-api-access-qrgs4\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.354069 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.375622 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-config-data" (OuterVolumeSpecName: "config-data") pod "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" (UID: "9e93a3a2-6818-4b61-a9d9-bba9e4927dd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.408151 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7b97bcf6-xn2wg"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.417051 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5dc7fbb8-8h9gn" event={"ID":"55cedb34-7c52-47ec-8f60-5d3e362f5948","Type":"ContainerStarted","Data":"6a6bb5208ac980078fcfd13dcef8211355367718988a765859e0cf35d85c777b"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.426026 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.426061 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.426825 4771 generic.go:334] "Generic (PLEG): container finished" podID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerID="a183dec526fb5112857524996951924548b155452a711f8e06151b0aaad6fd77" exitCode=0 Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.426884 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-m957m" event={"ID":"bcbc418c-ff34-4040-bfc9-2c8111568cd0","Type":"ContainerDied","Data":"a183dec526fb5112857524996951924548b155452a711f8e06151b0aaad6fd77"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.426921 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-m957m" event={"ID":"bcbc418c-ff34-4040-bfc9-2c8111568cd0","Type":"ContainerStarted","Data":"1853ff60bb913a1fa21d2f7bd580aff674fba807ac45b392c296ec8064e22dbf"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.454489 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d9579b5b-l9trm" event={"ID":"3d093a30-424c-4a0c-a749-7a47328c4b2d","Type":"ContainerStarted","Data":"3e3eb0bab65b702bb1c067606e0249eadf9c7d613d86dd1d0a6c61528ba39a88"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.475525 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"781b98c1-28a5-455a-a1d3-4bd92e1692fc","Type":"ContainerDied","Data":"1feedc6b892f385bd6d399429d63f7a7d43c3efbd49b856ab27a81bfc6feb7a7"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.475585 4771 scope.go:117] "RemoveContainer" containerID="234a8f32d7d8c8d46847d6efcd962a15ec9e37cb5dc6b492b04faf8f820fbae0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.475747 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.483347 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67d9579b5b-l9trm" podStartSLOduration=31.81490441 podStartE2EDuration="32.483324044s" podCreationTimestamp="2026-01-29 09:25:10 +0000 UTC" firstStartedPulling="2026-01-29 09:25:39.006831489 +0000 UTC m=+1159.129671716" lastFinishedPulling="2026-01-29 09:25:39.675251123 +0000 UTC m=+1159.798091350" observedRunningTime="2026-01-29 09:25:42.478886462 +0000 UTC m=+1162.601726699" watchObservedRunningTime="2026-01-29 09:25:42.483324044 +0000 UTC m=+1162.606164271" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.489068 4771 generic.go:334] "Generic (PLEG): container finished" podID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerID="2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f" exitCode=0 Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.489103 4771 generic.go:334] "Generic (PLEG): container finished" podID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerID="95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474" exitCode=143 Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.489135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1","Type":"ContainerDied","Data":"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.489169 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1","Type":"ContainerDied","Data":"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.489184 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e93a3a2-6818-4b61-a9d9-bba9e4927dd1","Type":"ContainerDied","Data":"584f7858a9558d5be5ebea941992b9825e6ac2210f0b9f0aa277cc4f1b267b1f"} Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.489249 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.557773 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.608122 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.628785 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.649343 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.666640 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: E0129 09:25:42.667340 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-httpd" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667360 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-httpd" Jan 29 09:25:42 crc kubenswrapper[4771]: E0129 09:25:42.667398 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-log" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667409 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-log" Jan 29 09:25:42 crc kubenswrapper[4771]: E0129 09:25:42.667426 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-log" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667434 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-log" Jan 29 09:25:42 crc kubenswrapper[4771]: E0129 09:25:42.667455 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-httpd" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667462 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-httpd" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667685 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-httpd" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667791 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-httpd" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667807 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" containerName="glance-log" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.667818 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" containerName="glance-log" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.668969 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.692035 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.692532 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.693180 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x86l6" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.694878 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.713081 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.714893 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.719609 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.719913 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.740782 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.741952 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.741980 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.742008 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.742034 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.742053 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.742082 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.742102 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6hwb\" (UniqueName: \"kubernetes.io/projected/e4991753-f456-4f5d-8a34-6f440f82ad8f-kube-api-access-q6hwb\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.742168 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.761465 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844701 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844728 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844766 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844792 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844845 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.844867 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6hwb\" (UniqueName: \"kubernetes.io/projected/e4991753-f456-4f5d-8a34-6f440f82ad8f-kube-api-access-q6hwb\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.845474 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.859275 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.859620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.861225 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.864552 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.870723 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.871593 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781b98c1-28a5-455a-a1d3-4bd92e1692fc" path="/var/lib/kubelet/pods/781b98c1-28a5-455a-a1d3-4bd92e1692fc/volumes" Jan 29 09:25:42 crc 
kubenswrapper[4771]: I0129 09:25:42.872812 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.883072 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e93a3a2-6818-4b61-a9d9-bba9e4927dd1" path="/var/lib/kubelet/pods/9e93a3a2-6818-4b61-a9d9-bba9e4927dd1/volumes" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.883906 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.895938 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6hwb\" (UniqueName: \"kubernetes.io/projected/e4991753-f456-4f5d-8a34-6f440f82ad8f-kube-api-access-q6hwb\") pod \"glance-default-internal-api-0\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khxlk\" (UniqueName: \"kubernetes.io/projected/af79ed5e-0bb8-4195-b0bb-658ef8106824-kube-api-access-khxlk\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947335 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-logs\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947359 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-scripts\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947412 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-config-data\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947464 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947497 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947540 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:42 crc kubenswrapper[4771]: I0129 09:25:42.947562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.029478 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.049785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khxlk\" (UniqueName: \"kubernetes.io/projected/af79ed5e-0bb8-4195-b0bb-658ef8106824-kube-api-access-khxlk\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.049839 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-logs\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.049862 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-scripts\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.049926 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-config-data\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.049992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.050017 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc 
kubenswrapper[4771]: I0129 09:25:43.050076 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.050100 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.051774 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-logs\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.052707 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.053206 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.082756 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.083362 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-scripts\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.084023 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-config-data\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.085356 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.092541 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khxlk\" 
(UniqueName: \"kubernetes.io/projected/af79ed5e-0bb8-4195-b0bb-658ef8106824-kube-api-access-khxlk\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.127281 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.140770 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dfd54df77-pk6bs"] Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.142388 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.151361 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.151629 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.201813 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dfd54df77-pk6bs"] Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.254998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-combined-ca-bundle\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.255143 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-public-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.255170 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-internal-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.255210 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-config\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.255234 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-ovndb-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.255304 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-httpd-config\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.255328 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn9kv\" (UniqueName: \"kubernetes.io/projected/f6f77513-ae83-4d90-9959-732cd517d2eb-kube-api-access-hn9kv\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.357991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-public-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.358087 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-internal-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.358172 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-config\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.358204 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-ovndb-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.358270 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-httpd-config\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.358294 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn9kv\" (UniqueName: \"kubernetes.io/projected/f6f77513-ae83-4d90-9959-732cd517d2eb-kube-api-access-hn9kv\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.358380 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-combined-ca-bundle\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.362020 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-public-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.362279 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-internal-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.363919 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-combined-ca-bundle\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.365434 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-httpd-config\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.366252 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-ovndb-tls-certs\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.374235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-config\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.385977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn9kv\" (UniqueName: \"kubernetes.io/projected/f6f77513-ae83-4d90-9959-732cd517d2eb-kube-api-access-hn9kv\") pod \"neutron-6dfd54df77-pk6bs\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.404665 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.503863 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.506219 4771 generic.go:334] "Generic (PLEG): container finished" podID="883872f0-cb88-4095-b918-b971d8c3c0b6" containerID="79076e4a83c2a714ab820355ea704fbdde238eb57135e539a9aa3f931719370a" exitCode=0 Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.506282 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v8h9" event={"ID":"883872f0-cb88-4095-b918-b971d8c3c0b6","Type":"ContainerDied","Data":"79076e4a83c2a714ab820355ea704fbdde238eb57135e539a9aa3f931719370a"} Jan 29 09:25:43 crc kubenswrapper[4771]: I0129 09:25:43.746214 4771 scope.go:117] "RemoveContainer" containerID="3464b853804f9d653f2579bed2e2c7dfda358fe4dc70c2f28df3b0e0e129e257" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.003119 4771 scope.go:117] "RemoveContainer" containerID="2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.103015 4771 scope.go:117] "RemoveContainer" containerID="95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.170270 4771 scope.go:117] "RemoveContainer" containerID="2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f" Jan 29 09:25:44 crc kubenswrapper[4771]: E0129 09:25:44.170844 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f\": container with ID starting with 2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f not found: ID does not exist" containerID="2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.170883 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f"} err="failed to get container status \"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f\": rpc error: code = NotFound desc = could not find container \"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f\": container with ID starting with 2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f not found: ID does not exist" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.170906 4771 scope.go:117] "RemoveContainer" containerID="95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474" Jan 29 09:25:44 crc kubenswrapper[4771]: E0129 09:25:44.172909 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474\": container with ID starting with 95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474 not found: ID does not exist" containerID="95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.172946 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474"} err="failed to get container status \"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474\": rpc error: code = NotFound desc = could not find container \"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474\": container with ID starting with 
95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474 not found: ID does not exist" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.172968 4771 scope.go:117] "RemoveContainer" containerID="2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.179157 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f"} err="failed to get container status \"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f\": rpc error: code = NotFound desc = could not find container \"2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f\": container with ID starting with 2e5043cc8d5030d7a06e52f89a2005e0d04d30b7ba047f7e6a107cfc8241c56f not found: ID does not exist" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.179205 4771 scope.go:117] "RemoveContainer" containerID="95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.181838 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474"} err="failed to get container status \"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474\": rpc error: code = NotFound desc = could not find container \"95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474\": container with ID starting with 95516ee1048c1fbabed78465941509d6e66f3f80bdb072a73a26a31328dc9474 not found: ID does not exist" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.535647 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-m957m" event={"ID":"bcbc418c-ff34-4040-bfc9-2c8111568cd0","Type":"ContainerStarted","Data":"5cfe1dc3c9aeb16b1405aa53eadb3e8740ea1900ed1718290d4ece8172741e4d"} Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.537720 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.553641 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b97bcf6-xn2wg" event={"ID":"84e8bfd2-5035-43a0-80ee-c4ceed1d422c","Type":"ContainerStarted","Data":"2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde"} Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.555048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b97bcf6-xn2wg" event={"ID":"84e8bfd2-5035-43a0-80ee-c4ceed1d422c","Type":"ContainerStarted","Data":"4af07087216cf338d96e38fd0dec8d85b4983cc45f40b350feaef7cb9b0699cc"} Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.590106 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b6c948c7-m957m" podStartSLOduration=5.59008227 podStartE2EDuration="5.59008227s" podCreationTimestamp="2026-01-29 09:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:44.573949849 +0000 UTC m=+1164.696790076" watchObservedRunningTime="2026-01-29 09:25:44.59008227 +0000 UTC m=+1164.712922487" Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.608549 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"7cbdea118a66177877708ac03440b5e1fc40ee8b5a84de73e834e8cf4a8c4572"} Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.636719 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.714418 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dfd54df77-pk6bs"] Jan 29 09:25:44 crc kubenswrapper[4771]: W0129 09:25:44.727010 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6f77513_ae83_4d90_9959_732cd517d2eb.slice/crio-ee0a6497953fc608baf95a5102519bb3565d90e10354b2edcebe697004414bc1 WatchSource:0}: Error finding container ee0a6497953fc608baf95a5102519bb3565d90e10354b2edcebe697004414bc1: Status 404 returned error can't find the container with id ee0a6497953fc608baf95a5102519bb3565d90e10354b2edcebe697004414bc1 Jan 29 09:25:44 crc kubenswrapper[4771]: I0129 09:25:44.824470 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.137535 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.228835 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883872f0-cb88-4095-b918-b971d8c3c0b6-logs\") pod \"883872f0-cb88-4095-b918-b971d8c3c0b6\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.229350 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpzqg\" (UniqueName: \"kubernetes.io/projected/883872f0-cb88-4095-b918-b971d8c3c0b6-kube-api-access-mpzqg\") pod \"883872f0-cb88-4095-b918-b971d8c3c0b6\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.229410 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-config-data\") pod \"883872f0-cb88-4095-b918-b971d8c3c0b6\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.229492 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-scripts\") pod \"883872f0-cb88-4095-b918-b971d8c3c0b6\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.229436 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883872f0-cb88-4095-b918-b971d8c3c0b6-logs" (OuterVolumeSpecName: "logs") pod "883872f0-cb88-4095-b918-b971d8c3c0b6" (UID: "883872f0-cb88-4095-b918-b971d8c3c0b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.229565 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-combined-ca-bundle\") pod \"883872f0-cb88-4095-b918-b971d8c3c0b6\" (UID: \"883872f0-cb88-4095-b918-b971d8c3c0b6\") " Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.230063 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883872f0-cb88-4095-b918-b971d8c3c0b6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.237318 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-scripts" (OuterVolumeSpecName: "scripts") pod "883872f0-cb88-4095-b918-b971d8c3c0b6" (UID: "883872f0-cb88-4095-b918-b971d8c3c0b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.250686 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883872f0-cb88-4095-b918-b971d8c3c0b6-kube-api-access-mpzqg" (OuterVolumeSpecName: "kube-api-access-mpzqg") pod "883872f0-cb88-4095-b918-b971d8c3c0b6" (UID: "883872f0-cb88-4095-b918-b971d8c3c0b6"). InnerVolumeSpecName "kube-api-access-mpzqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.277060 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-config-data" (OuterVolumeSpecName: "config-data") pod "883872f0-cb88-4095-b918-b971d8c3c0b6" (UID: "883872f0-cb88-4095-b918-b971d8c3c0b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.278814 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883872f0-cb88-4095-b918-b971d8c3c0b6" (UID: "883872f0-cb88-4095-b918-b971d8c3c0b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.332991 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpzqg\" (UniqueName: \"kubernetes.io/projected/883872f0-cb88-4095-b918-b971d8c3c0b6-kube-api-access-mpzqg\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.333048 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.333058 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.333072 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883872f0-cb88-4095-b918-b971d8c3c0b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.632350 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af79ed5e-0bb8-4195-b0bb-658ef8106824","Type":"ContainerStarted","Data":"8ec447cc190709358c7877948f027bc1d4d99b15eb34ed7d19daf33cab660f02"} Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.698666 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-659d957f7d-m6f6g"] Jan 29 09:25:45 crc kubenswrapper[4771]: E0129 09:25:45.699367 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883872f0-cb88-4095-b918-b971d8c3c0b6" containerName="placement-db-sync" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.699384 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="883872f0-cb88-4095-b918-b971d8c3c0b6" containerName="placement-db-sync" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.699572 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="883872f0-cb88-4095-b918-b971d8c3c0b6" containerName="placement-db-sync" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.700842 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.707173 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.707246 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8v8h9" event={"ID":"883872f0-cb88-4095-b918-b971d8c3c0b6","Type":"ContainerDied","Data":"7c4e5b2427cc5ed42a0dff5871e7d277e2bbd147c23d77dd19e5ea672c788cd2"} Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.707293 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c4e5b2427cc5ed42a0dff5871e7d277e2bbd147c23d77dd19e5ea672c788cd2" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.707392 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8v8h9" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.707411 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.750686 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659d957f7d-m6f6g"] Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.761815 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfd54df77-pk6bs" event={"ID":"f6f77513-ae83-4d90-9959-732cd517d2eb","Type":"ContainerStarted","Data":"a2f987ac2d3da99904f6a154d5f08a3389cc332fb9da905bc9eb9cc15e139f31"} Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.761840 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfd54df77-pk6bs" event={"ID":"f6f77513-ae83-4d90-9959-732cd517d2eb","Type":"ContainerStarted","Data":"ee0a6497953fc608baf95a5102519bb3565d90e10354b2edcebe697004414bc1"} Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.763523 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4991753-f456-4f5d-8a34-6f440f82ad8f","Type":"ContainerStarted","Data":"095735f9592cb3c5ca8339b0651f7f6387e78ea04d2ad81d28152c067e707a0e"} Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.864461 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-internal-tls-certs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.864510 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-config-data\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.864559 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpvc\" (UniqueName: \"kubernetes.io/projected/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-kube-api-access-khpvc\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.864644 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-scripts\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.864743 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-public-tls-certs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.864766 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-combined-ca-bundle\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.864791 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-logs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.967104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpvc\" (UniqueName: \"kubernetes.io/projected/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-kube-api-access-khpvc\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.967571 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-scripts\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.967819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-public-tls-certs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.967858 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-combined-ca-bundle\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.967902 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-logs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.968011 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-internal-tls-certs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.968051 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-config-data\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.970981 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-logs\") pod \"placement-659d957f7d-m6f6g\" (UID: 
\"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.979269 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-scripts\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.980352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-public-tls-certs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:45 crc kubenswrapper[4771]: I0129 09:25:45.980474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-config-data\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.005893 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-combined-ca-bundle\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.036573 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpvc\" (UniqueName: \"kubernetes.io/projected/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-kube-api-access-khpvc\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.036933 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-internal-tls-certs\") pod \"placement-659d957f7d-m6f6g\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.249591 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.863131 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b97bcf6-xn2wg" event={"ID":"84e8bfd2-5035-43a0-80ee-c4ceed1d422c","Type":"ContainerStarted","Data":"6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849"} Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.864264 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.896761 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b7b97bcf6-xn2wg" podStartSLOduration=6.896745448 podStartE2EDuration="6.896745448s" podCreationTimestamp="2026-01-29 09:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:46.896262405 +0000 UTC m=+1167.019102632" watchObservedRunningTime="2026-01-29 09:25:46.896745448 +0000 UTC m=+1167.019585675" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.914429 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.914473 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"47a7d0db7344bb741a6814223bab8ddaded98be2b1f7f0006d219e8601b7aa06"} Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.914499 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"870b232957736e20098a05b778dcb1dd6076e75c4d995b148324ec7689b89960"} Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.914510 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfd54df77-pk6bs" event={"ID":"f6f77513-ae83-4d90-9959-732cd517d2eb","Type":"ContainerStarted","Data":"302de8bf57f36136b8cbeee2e5782db40b29d879f6ec788edadb6d8c9a7b2466"} Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.938536 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4991753-f456-4f5d-8a34-6f440f82ad8f","Type":"ContainerStarted","Data":"8b8985f775d4eed7bfbf7e3f49a61b541ef3819c64eed140a67e87b56711eaed"} Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.949113 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af79ed5e-0bb8-4195-b0bb-658ef8106824","Type":"ContainerStarted","Data":"e46deb98db0755606370454eeb2e8aa8e1947e692c761c9eddfe2b0bae494804"} Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.951055 4771 generic.go:334] "Generic (PLEG): container finished" podID="015c0ccc-d729-4d0a-8168-b897f1c451da" containerID="91be09616b3fb43ae30e99014c8a1b5f10e78679d0a55fa87b96ff86e709a2ef" exitCode=0 Jan 29 09:25:46 crc kubenswrapper[4771]: I0129 09:25:46.951794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bvppf" event={"ID":"015c0ccc-d729-4d0a-8168-b897f1c451da","Type":"ContainerDied","Data":"91be09616b3fb43ae30e99014c8a1b5f10e78679d0a55fa87b96ff86e709a2ef"} Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.010397 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-659d957f7d-m6f6g"] Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.013163 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dfd54df77-pk6bs" podStartSLOduration=4.013146889 podStartE2EDuration="4.013146889s" podCreationTimestamp="2026-01-29 09:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:46.958069714 +0000 UTC m=+1167.080909941" watchObservedRunningTime="2026-01-29 09:25:47.013146889 +0000 UTC m=+1167.135987116" Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.964747 4771 generic.go:334] "Generic (PLEG): container finished" podID="8a9e93f7-bada-4141-887c-4174d899b95e" containerID="c7f0a480a6332a49cb2d4e45193ef1fed5cfded327bae127694ebc276bd37041" exitCode=0 Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.964808 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2mfct" event={"ID":"8a9e93f7-bada-4141-887c-4174d899b95e","Type":"ContainerDied","Data":"c7f0a480a6332a49cb2d4e45193ef1fed5cfded327bae127694ebc276bd37041"} Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.975162 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"2ad4889d31dfe3dc80cc78ac7e5b20948c03eb0010984ddd3dde8036a209ce78"} Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.980217 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af79ed5e-0bb8-4195-b0bb-658ef8106824","Type":"ContainerStarted","Data":"90f040df567e538cab9b20742f823d8c87533b8d82b17dce94f172e073431a1e"} Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.987206 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659d957f7d-m6f6g" event={"ID":"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77","Type":"ContainerStarted","Data":"e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d"} Jan 29 09:25:47 crc kubenswrapper[4771]: I0129 09:25:47.987272 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659d957f7d-m6f6g" event={"ID":"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77","Type":"ContainerStarted","Data":"f0a128a280668b57029df2336b01457170cb923b0059fb6062210f2c20f66b9c"} Jan 29 09:25:48 crc kubenswrapper[4771]: I0129 09:25:48.029823 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.029803019 podStartE2EDuration="6.029803019s" podCreationTimestamp="2026-01-29 09:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:48.026134828 +0000 UTC m=+1168.148975075" watchObservedRunningTime="2026-01-29 09:25:48.029803019 +0000 UTC m=+1168.152643246" Jan 29 09:25:49 crc kubenswrapper[4771]: I0129 09:25:49.010404 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4991753-f456-4f5d-8a34-6f440f82ad8f","Type":"ContainerStarted","Data":"9c353bc4595acbc17a7cc233454a1abcd95f0ccb2d10ac94976139ece910ef0c"} Jan 29 09:25:49 crc kubenswrapper[4771]: I0129 09:25:49.017394 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659d957f7d-m6f6g" 
event={"ID":"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77","Type":"ContainerStarted","Data":"5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f"} Jan 29 09:25:49 crc kubenswrapper[4771]: I0129 09:25:49.018364 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:49 crc kubenswrapper[4771]: I0129 09:25:49.018883 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:25:49 crc kubenswrapper[4771]: I0129 09:25:49.041501 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.041464742 podStartE2EDuration="7.041464742s" podCreationTimestamp="2026-01-29 09:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:49.033491834 +0000 UTC m=+1169.156332061" watchObservedRunningTime="2026-01-29 09:25:49.041464742 +0000 UTC m=+1169.164304959" Jan 29 09:25:49 crc kubenswrapper[4771]: I0129 09:25:49.063602 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-659d957f7d-m6f6g" podStartSLOduration=4.063584087 podStartE2EDuration="4.063584087s" podCreationTimestamp="2026-01-29 09:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:49.059270789 +0000 UTC m=+1169.182111026" watchObservedRunningTime="2026-01-29 09:25:49.063584087 +0000 UTC m=+1169.186424314" Jan 29 09:25:50 crc kubenswrapper[4771]: I0129 09:25:50.408374 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:25:50 crc kubenswrapper[4771]: I0129 09:25:50.485137 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-lwtkw"] Jan 29 09:25:50 crc kubenswrapper[4771]: I0129 09:25:50.485401 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" containerName="dnsmasq-dns" containerID="cri-o://750c7ff61470a27e421526514059d66ee5bb37bbc69f92666e71fbe1901c4c30" gracePeriod=10 Jan 29 09:25:50 crc kubenswrapper[4771]: I0129 09:25:50.602822 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:50 crc kubenswrapper[4771]: I0129 09:25:50.602938 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:25:50 crc kubenswrapper[4771]: I0129 09:25:50.856036 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:50 crc kubenswrapper[4771]: I0129 09:25:50.856327 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:25:51 crc kubenswrapper[4771]: I0129 09:25:51.040420 4771 generic.go:334] "Generic (PLEG): container finished" podID="82515d22-24f0-4673-ae59-bd788ca54a64" containerID="750c7ff61470a27e421526514059d66ee5bb37bbc69f92666e71fbe1901c4c30" exitCode=0 Jan 29 09:25:51 crc kubenswrapper[4771]: I0129 09:25:51.040514 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" 
event={"ID":"82515d22-24f0-4673-ae59-bd788ca54a64","Type":"ContainerDied","Data":"750c7ff61470a27e421526514059d66ee5bb37bbc69f92666e71fbe1901c4c30"} Jan 29 09:25:51 crc kubenswrapper[4771]: I0129 09:25:51.594864 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.031012 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.031312 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.130171 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.139139 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.141939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bvppf" event={"ID":"015c0ccc-d729-4d0a-8168-b897f1c451da","Type":"ContainerDied","Data":"fa6f826cd0dde214f8f237d517d10a72f9b06e19fff8877612079583244df808"} Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.142089 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6f826cd0dde214f8f237d517d10a72f9b06e19fff8877612079583244df808" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.144419 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.149135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2mfct" event={"ID":"8a9e93f7-bada-4141-887c-4174d899b95e","Type":"ContainerDied","Data":"d7e17fe212a7dc4154988eb78647f7bf26f1fbd5433fe12fa0eed3a95d69832a"} Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.149219 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e17fe212a7dc4154988eb78647f7bf26f1fbd5433fe12fa0eed3a95d69832a" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.149254 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.175500 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.268465 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-config-data\") pod \"8a9e93f7-bada-4141-887c-4174d899b95e\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.268930 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46txf\" (UniqueName: \"kubernetes.io/projected/8a9e93f7-bada-4141-887c-4174d899b95e-kube-api-access-46txf\") pod \"8a9e93f7-bada-4141-887c-4174d899b95e\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.268973 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-scripts\") pod \"8a9e93f7-bada-4141-887c-4174d899b95e\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.269039 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-credential-keys\") pod \"8a9e93f7-bada-4141-887c-4174d899b95e\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.269123 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv2pn\" (UniqueName: \"kubernetes.io/projected/015c0ccc-d729-4d0a-8168-b897f1c451da-kube-api-access-nv2pn\") pod \"015c0ccc-d729-4d0a-8168-b897f1c451da\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.269146 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-fernet-keys\") pod \"8a9e93f7-bada-4141-887c-4174d899b95e\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.269236 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-combined-ca-bundle\") pod \"8a9e93f7-bada-4141-887c-4174d899b95e\" (UID: \"8a9e93f7-bada-4141-887c-4174d899b95e\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.269275 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-combined-ca-bundle\") pod \"015c0ccc-d729-4d0a-8168-b897f1c451da\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.269298 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-db-sync-config-data\") pod \"015c0ccc-d729-4d0a-8168-b897f1c451da\" (UID: \"015c0ccc-d729-4d0a-8168-b897f1c451da\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.289226 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8a9e93f7-bada-4141-887c-4174d899b95e" (UID: "8a9e93f7-bada-4141-887c-4174d899b95e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.306382 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015c0ccc-d729-4d0a-8168-b897f1c451da-kube-api-access-nv2pn" (OuterVolumeSpecName: "kube-api-access-nv2pn") pod "015c0ccc-d729-4d0a-8168-b897f1c451da" (UID: "015c0ccc-d729-4d0a-8168-b897f1c451da"). InnerVolumeSpecName "kube-api-access-nv2pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.306433 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "015c0ccc-d729-4d0a-8168-b897f1c451da" (UID: "015c0ccc-d729-4d0a-8168-b897f1c451da"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.309842 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-scripts" (OuterVolumeSpecName: "scripts") pod "8a9e93f7-bada-4141-887c-4174d899b95e" (UID: "8a9e93f7-bada-4141-887c-4174d899b95e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.313458 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8a9e93f7-bada-4141-887c-4174d899b95e" (UID: "8a9e93f7-bada-4141-887c-4174d899b95e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.313932 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9e93f7-bada-4141-887c-4174d899b95e-kube-api-access-46txf" (OuterVolumeSpecName: "kube-api-access-46txf") pod "8a9e93f7-bada-4141-887c-4174d899b95e" (UID: "8a9e93f7-bada-4141-887c-4174d899b95e"). InnerVolumeSpecName "kube-api-access-46txf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.335232 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-config-data" (OuterVolumeSpecName: "config-data") pod "8a9e93f7-bada-4141-887c-4174d899b95e" (UID: "8a9e93f7-bada-4141-887c-4174d899b95e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.357556 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a9e93f7-bada-4141-887c-4174d899b95e" (UID: "8a9e93f7-bada-4141-887c-4174d899b95e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373228 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373256 4771 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373269 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv2pn\" (UniqueName: \"kubernetes.io/projected/015c0ccc-d729-4d0a-8168-b897f1c451da-kube-api-access-nv2pn\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373278 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373288 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373297 4771 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373306 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a9e93f7-bada-4141-887c-4174d899b95e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.373318 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46txf\" (UniqueName: \"kubernetes.io/projected/8a9e93f7-bada-4141-887c-4174d899b95e-kube-api-access-46txf\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.380678 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "015c0ccc-d729-4d0a-8168-b897f1c451da" (UID: "015c0ccc-d729-4d0a-8168-b897f1c451da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.406534 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.406587 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.473943 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.475287 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015c0ccc-d729-4d0a-8168-b897f1c451da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.489855 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.490852 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.576875 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-nb\") pod \"82515d22-24f0-4673-ae59-bd788ca54a64\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.577298 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wx9n\" (UniqueName: \"kubernetes.io/projected/82515d22-24f0-4673-ae59-bd788ca54a64-kube-api-access-7wx9n\") pod \"82515d22-24f0-4673-ae59-bd788ca54a64\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.577340 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-sb\") pod \"82515d22-24f0-4673-ae59-bd788ca54a64\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.577384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-dns-svc\") pod \"82515d22-24f0-4673-ae59-bd788ca54a64\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.577619 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-config\") pod \"82515d22-24f0-4673-ae59-bd788ca54a64\" (UID: \"82515d22-24f0-4673-ae59-bd788ca54a64\") " Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.594008 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82515d22-24f0-4673-ae59-bd788ca54a64-kube-api-access-7wx9n" (OuterVolumeSpecName: "kube-api-access-7wx9n") pod "82515d22-24f0-4673-ae59-bd788ca54a64" (UID: "82515d22-24f0-4673-ae59-bd788ca54a64"). InnerVolumeSpecName "kube-api-access-7wx9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.646916 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82515d22-24f0-4673-ae59-bd788ca54a64" (UID: "82515d22-24f0-4673-ae59-bd788ca54a64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.647070 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-config" (OuterVolumeSpecName: "config") pod "82515d22-24f0-4673-ae59-bd788ca54a64" (UID: "82515d22-24f0-4673-ae59-bd788ca54a64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.650981 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82515d22-24f0-4673-ae59-bd788ca54a64" (UID: "82515d22-24f0-4673-ae59-bd788ca54a64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.662787 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82515d22-24f0-4673-ae59-bd788ca54a64" (UID: "82515d22-24f0-4673-ae59-bd788ca54a64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.681097 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.681133 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.681145 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wx9n\" (UniqueName: \"kubernetes.io/projected/82515d22-24f0-4673-ae59-bd788ca54a64-kube-api-access-7wx9n\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.681278 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:53 crc kubenswrapper[4771]: I0129 09:25:53.681290 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82515d22-24f0-4673-ae59-bd788ca54a64-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.160844 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerStarted","Data":"075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5"} Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.163945 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56798b757f-lwtkw" event={"ID":"82515d22-24f0-4673-ae59-bd788ca54a64","Type":"ContainerDied","Data":"d1fa4f70527aed561fab52e58ac200ae0ff5e9d3b5c698bcb2f025e740dbc129"} Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.164000 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-lwtkw" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.164028 4771 scope.go:117] "RemoveContainer" containerID="750c7ff61470a27e421526514059d66ee5bb37bbc69f92666e71fbe1901c4c30" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.164105 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bvppf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.164156 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2mfct" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.166613 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.166647 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.166669 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.210041 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-lwtkw"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.217719 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-lwtkw"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.363713 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-67ddc4bf8b-n46xf"] Jan 29 09:25:54 crc kubenswrapper[4771]: E0129 09:25:54.368815 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" containerName="dnsmasq-dns" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.368851 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" containerName="dnsmasq-dns" Jan 29 09:25:54 crc kubenswrapper[4771]: E0129 09:25:54.368867 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015c0ccc-d729-4d0a-8168-b897f1c451da" containerName="barbican-db-sync" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.368876 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="015c0ccc-d729-4d0a-8168-b897f1c451da" containerName="barbican-db-sync" Jan 29 09:25:54 crc kubenswrapper[4771]: E0129 09:25:54.368888 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" containerName="init" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.368894 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" containerName="init" Jan 29 09:25:54 crc kubenswrapper[4771]: E0129 09:25:54.368907 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9e93f7-bada-4141-887c-4174d899b95e" containerName="keystone-bootstrap" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.368915 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9e93f7-bada-4141-887c-4174d899b95e" containerName="keystone-bootstrap" Jan 29 09:25:54 crc 
kubenswrapper[4771]: I0129 09:25:54.369148 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" containerName="dnsmasq-dns" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.369163 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="015c0ccc-d729-4d0a-8168-b897f1c451da" containerName="barbican-db-sync" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.369171 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9e93f7-bada-4141-887c-4174d899b95e" containerName="keystone-bootstrap" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.369909 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.374154 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.374437 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-497zd" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.374646 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.374785 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.374872 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.375142 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.376364 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67ddc4bf8b-n46xf"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396148 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-internal-tls-certs\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396227 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-public-tls-certs\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396256 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-fernet-keys\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396351 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrtm\" (UniqueName: \"kubernetes.io/projected/f202e04b-5581-45cc-9b76-da029ee47b31-kube-api-access-slrtm\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 
09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396544 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-combined-ca-bundle\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396643 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-scripts\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396808 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-credential-keys\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.396873 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-config-data\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499083 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-credential-keys\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-config-data\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499197 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-internal-tls-certs\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499218 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-public-tls-certs\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-fernet-keys\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499267 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrtm\" (UniqueName: \"kubernetes.io/projected/f202e04b-5581-45cc-9b76-da029ee47b31-kube-api-access-slrtm\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499325 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-combined-ca-bundle\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.499351 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-scripts\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.504401 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-credential-keys\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.508525 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-internal-tls-certs\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.509320 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-config-data\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.509819 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-fernet-keys\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.510380 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-combined-ca-bundle\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.521074 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-scripts\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.521522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f202e04b-5581-45cc-9b76-da029ee47b31-public-tls-certs\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.608891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrtm\" (UniqueName: \"kubernetes.io/projected/f202e04b-5581-45cc-9b76-da029ee47b31-kube-api-access-slrtm\") pod \"keystone-67ddc4bf8b-n46xf\" (UID: \"f202e04b-5581-45cc-9b76-da029ee47b31\") " pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.639921 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8df94697f-2fq9x"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.641927 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.645366 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2tx8m" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.645678 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.648861 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.675259 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8df94697f-2fq9x"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.686880 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.700839 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7cf746d778-4gpwb"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.716255 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.721080 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.757342 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data-custom\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.757415 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d8lq\" (UniqueName: \"kubernetes.io/projected/4f3a9db6-57bc-4625-a2de-29c8ba725e10-kube-api-access-7d8lq\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.757613 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-combined-ca-bundle\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.757659 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.757835 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f3a9db6-57bc-4625-a2de-29c8ba725e10-logs\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.796031 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cf746d778-4gpwb"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.873012 4771 scope.go:117] "RemoveContainer" containerID="2f8bf43571910b0facc40d2db89513ca5af9aa16c346c96844fba970de57499d" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.914416 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data-custom\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.914533 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d8lq\" (UniqueName: \"kubernetes.io/projected/4f3a9db6-57bc-4625-a2de-29c8ba725e10-kube-api-access-7d8lq\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 
09:25:54.914594 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.914965 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-combined-ca-bundle\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.915010 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.915122 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f3a9db6-57bc-4625-a2de-29c8ba725e10-logs\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.915159 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data-custom\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.915194 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxm2\" (UniqueName: \"kubernetes.io/projected/2a874d0d-803a-4aaf-85b1-a2584fc5a751-kube-api-access-xmxm2\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.915263 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-combined-ca-bundle\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.915305 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a874d0d-803a-4aaf-85b1-a2584fc5a751-logs\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.917238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f3a9db6-57bc-4625-a2de-29c8ba725e10-logs\") pod 
\"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.919577 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82515d22-24f0-4673-ae59-bd788ca54a64" path="/var/lib/kubelet/pods/82515d22-24f0-4673-ae59-bd788ca54a64/volumes" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.935412 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data-custom\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.931739 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-combined-ca-bundle\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.985936 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5575df4f89-jlhn2"] Jan 29 09:25:54 crc kubenswrapper[4771]: I0129 09:25:54.994806 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.010677 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d8lq\" (UniqueName: \"kubernetes.io/projected/4f3a9db6-57bc-4625-a2de-29c8ba725e10-kube-api-access-7d8lq\") pod \"barbican-worker-8df94697f-2fq9x\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.011212 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.019851 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.021488 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data-custom\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.021619 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmxm2\" (UniqueName: \"kubernetes.io/projected/2a874d0d-803a-4aaf-85b1-a2584fc5a751-kube-api-access-xmxm2\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.021764 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-combined-ca-bundle\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.021894 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a874d0d-803a-4aaf-85b1-a2584fc5a751-logs\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.022482 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a874d0d-803a-4aaf-85b1-a2584fc5a751-logs\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.036282 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-nqf55"] Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.037798 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.040454 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.055582 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data-custom\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.055631 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.063974 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5575df4f89-jlhn2"] Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.067405 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-combined-ca-bundle\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.101280 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmxm2\" (UniqueName: \"kubernetes.io/projected/2a874d0d-803a-4aaf-85b1-a2584fc5a751-kube-api-access-xmxm2\") pod \"barbican-keystone-listener-7cf746d778-4gpwb\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.102665 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.104876 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-nqf55"] Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.138202 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6c444788cd-vwtrs"] Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.158426 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.177753 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6c444788cd-vwtrs"] Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.198880 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74b56b4686-z8692"] Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.201034 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.211772 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.212998 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74b56b4686-z8692"] Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.233308 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxl9\" (UniqueName: \"kubernetes.io/projected/0de35acf-081e-4511-8445-dd3e1d7ead0e-kube-api-access-xcxl9\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.233778 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-config-data\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.233950 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-dns-svc\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.234066 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.234366 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-config\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.234548 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.234669 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bef7ac33-b62c-4372-a3f2-98b951265ef3-logs\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.234862 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.236203 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h6hmk\" (UniqueName: \"kubernetes.io/projected/bef7ac33-b62c-4372-a3f2-98b951265ef3-kube-api-access-h6hmk\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.236349 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-combined-ca-bundle\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.236463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-config-data-custom\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338420 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6hmk\" (UniqueName: \"kubernetes.io/projected/bef7ac33-b62c-4372-a3f2-98b951265ef3-kube-api-access-h6hmk\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-combined-ca-bundle\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338562 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-combined-ca-bundle\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-config-data-custom\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcxl9\" (UniqueName: \"kubernetes.io/projected/0de35acf-081e-4511-8445-dd3e1d7ead0e-kube-api-access-xcxl9\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338675 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-combined-ca-bundle\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " 
pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338729 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-config-data-custom\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338781 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-config-data\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338815 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-dns-svc\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338844 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338878 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66c0206a-2567-49ef-b02a-016a97c6e057-logs\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338970 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f4c\" (UniqueName: \"kubernetes.io/projected/49846e1c-6efb-4d4f-875c-ab051d11de09-kube-api-access-66f4c\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.338998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.339059 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-config\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.339105 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49846e1c-6efb-4d4f-875c-ab051d11de09-logs\") pod 
\"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.339130 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.339175 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bef7ac33-b62c-4372-a3f2-98b951265ef3-logs\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.339224 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data-custom\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.339257 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-config-data\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.339298 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4thx\" (UniqueName: \"kubernetes.io/projected/66c0206a-2567-49ef-b02a-016a97c6e057-kube-api-access-f4thx\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.343131 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-dns-svc\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.344673 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.346056 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-config\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.349639 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bef7ac33-b62c-4372-a3f2-98b951265ef3-logs\") pod 
\"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.352000 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.355955 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-combined-ca-bundle\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.368088 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-config-data\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.368830 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bef7ac33-b62c-4372-a3f2-98b951265ef3-config-data-custom\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.369633 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6hmk\" (UniqueName: \"kubernetes.io/projected/bef7ac33-b62c-4372-a3f2-98b951265ef3-kube-api-access-h6hmk\") pod \"barbican-worker-5575df4f89-jlhn2\" (UID: \"bef7ac33-b62c-4372-a3f2-98b951265ef3\") " pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.375237 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcxl9\" (UniqueName: \"kubernetes.io/projected/0de35acf-081e-4511-8445-dd3e1d7ead0e-kube-api-access-xcxl9\") pod \"dnsmasq-dns-798d46d59c-nqf55\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441202 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66c0206a-2567-49ef-b02a-016a97c6e057-logs\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441304 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441336 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66f4c\" (UniqueName: \"kubernetes.io/projected/49846e1c-6efb-4d4f-875c-ab051d11de09-kube-api-access-66f4c\") pod 
\"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441388 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49846e1c-6efb-4d4f-875c-ab051d11de09-logs\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441441 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data-custom\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441465 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-config-data\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441490 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4thx\" (UniqueName: \"kubernetes.io/projected/66c0206a-2567-49ef-b02a-016a97c6e057-kube-api-access-f4thx\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-combined-ca-bundle\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441572 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-combined-ca-bundle\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.441592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-config-data-custom\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.450527 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66c0206a-2567-49ef-b02a-016a97c6e057-logs\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.451009 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/49846e1c-6efb-4d4f-875c-ab051d11de09-logs\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.497439 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-combined-ca-bundle\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.498209 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-combined-ca-bundle\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.498277 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-config-data\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.498296 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49846e1c-6efb-4d4f-875c-ab051d11de09-config-data-custom\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.498374 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data-custom\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.532394 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66f4c\" (UniqueName: \"kubernetes.io/projected/49846e1c-6efb-4d4f-875c-ab051d11de09-kube-api-access-66f4c\") pod \"barbican-keystone-listener-6c444788cd-vwtrs\" (UID: \"49846e1c-6efb-4d4f-875c-ab051d11de09\") " pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.532513 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4thx\" (UniqueName: \"kubernetes.io/projected/66c0206a-2567-49ef-b02a-016a97c6e057-kube-api-access-f4thx\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.533526 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data\") pod \"barbican-api-74b56b4686-z8692\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.574346 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5575df4f89-jlhn2" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.594932 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.604422 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.612066 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:55 crc kubenswrapper[4771]: I0129 09:25:55.956943 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8df94697f-2fq9x"] Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.177344 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67ddc4bf8b-n46xf"] Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.228068 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cf746d778-4gpwb"] Jan 29 09:25:56 crc kubenswrapper[4771]: W0129 09:25:56.309887 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf202e04b_5581_45cc_9b76_da029ee47b31.slice/crio-4a2f6858d9372ef029efe31a289ba4178bdd4520039161a25efb723125ba8ab3 WatchSource:0}: Error finding container 4a2f6858d9372ef029efe31a289ba4178bdd4520039161a25efb723125ba8ab3: Status 404 returned error can't find the container with id 4a2f6858d9372ef029efe31a289ba4178bdd4520039161a25efb723125ba8ab3 Jan 29 09:25:56 crc kubenswrapper[4771]: W0129 09:25:56.347594 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a874d0d_803a_4aaf_85b1_a2584fc5a751.slice/crio-4a79bbc89b96d8ab5a487c7e39dcda491fc417bf8b1e5ddbf7488e14da4bf3e0 WatchSource:0}: Error finding container 4a79bbc89b96d8ab5a487c7e39dcda491fc417bf8b1e5ddbf7488e14da4bf3e0: Status 404 returned error can't find the container with id 4a79bbc89b96d8ab5a487c7e39dcda491fc417bf8b1e5ddbf7488e14da4bf3e0 Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.347877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"3a238fb8297f8e4e902465decc79eb886f12886e1c07567d8b9fb05f29447d9c"} Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.364363 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.364385 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.364607 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df94697f-2fq9x" event={"ID":"4f3a9db6-57bc-4625-a2de-29c8ba725e10","Type":"ContainerStarted","Data":"7ff5b076b0e5dae31f8b9e619ba6ef21414334d4336bb9282ae0cce3c55f90bf"} Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.364675 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.364723 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.401021 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-5575df4f89-jlhn2"] Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.580176 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-nqf55"] Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.700181 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6c444788cd-vwtrs"] Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.796063 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74b56b4686-z8692"] Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.895086 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:56 crc kubenswrapper[4771]: I0129 09:25:56.895137 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.411523 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" event={"ID":"49846e1c-6efb-4d4f-875c-ab051d11de09","Type":"ContainerStarted","Data":"bebbdd951d12c8975fcbe01f752ee3b33162a347a7648ea090fba6042a46bb11"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.436109 4771 generic.go:334] "Generic (PLEG): container finished" podID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerID="28e105f96cdaa921bec35cb15b5b7737cc1cf5138d96eb3bef419b1a254e28bb" exitCode=0 Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.436219 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" event={"ID":"0de35acf-081e-4511-8445-dd3e1d7ead0e","Type":"ContainerDied","Data":"28e105f96cdaa921bec35cb15b5b7737cc1cf5138d96eb3bef419b1a254e28bb"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.436258 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" event={"ID":"0de35acf-081e-4511-8445-dd3e1d7ead0e","Type":"ContainerStarted","Data":"9332cd9cf3edf4b31981d98fdf49314f2523b00863091880ee10d92e0195e74b"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.452818 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" event={"ID":"2a874d0d-803a-4aaf-85b1-a2584fc5a751","Type":"ContainerStarted","Data":"4a79bbc89b96d8ab5a487c7e39dcda491fc417bf8b1e5ddbf7488e14da4bf3e0"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.484421 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67ddc4bf8b-n46xf" event={"ID":"f202e04b-5581-45cc-9b76-da029ee47b31","Type":"ContainerStarted","Data":"59ee0115129e909f62f0e6d62dae1200db8e1435e7c8294be86b3e349f22f09d"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.484883 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67ddc4bf8b-n46xf" event={"ID":"f202e04b-5581-45cc-9b76-da029ee47b31","Type":"ContainerStarted","Data":"4a2f6858d9372ef029efe31a289ba4178bdd4520039161a25efb723125ba8ab3"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.488346 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.515313 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5575df4f89-jlhn2" 
event={"ID":"bef7ac33-b62c-4372-a3f2-98b951265ef3","Type":"ContainerStarted","Data":"d7a3eaa152ba6a8755184dc66f7a103d926d857fe9d8b88e75d01bee00fe8e31"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.569034 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g2fcd" event={"ID":"29ab9c1d-3798-4151-bf0b-63227f0e45a4","Type":"ContainerStarted","Data":"ee279fb77130501382f07268d88b8252be9bf08b614342e0da6c83af22e3313b"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.579350 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-67ddc4bf8b-n46xf" podStartSLOduration=3.579320595 podStartE2EDuration="3.579320595s" podCreationTimestamp="2026-01-29 09:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:57.515579683 +0000 UTC m=+1177.638419920" watchObservedRunningTime="2026-01-29 09:25:57.579320595 +0000 UTC m=+1177.702160822" Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.627656 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-g2fcd" podStartSLOduration=4.357162059 podStartE2EDuration="57.627635255s" podCreationTimestamp="2026-01-29 09:25:00 +0000 UTC" firstStartedPulling="2026-01-29 09:25:01.854413353 +0000 UTC m=+1121.977253580" lastFinishedPulling="2026-01-29 09:25:55.124886549 +0000 UTC m=+1175.247726776" observedRunningTime="2026-01-29 09:25:57.609578172 +0000 UTC m=+1177.732418409" watchObservedRunningTime="2026-01-29 09:25:57.627635255 +0000 UTC m=+1177.750475482" Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.647904 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"42c09d9572e9304db1360299b976b07e0a354b1880f57d4d7022496cd4b1492a"} Jan 29 09:25:57 crc kubenswrapper[4771]: I0129 09:25:57.651771 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b56b4686-z8692" event={"ID":"66c0206a-2567-49ef-b02a-016a97c6e057","Type":"ContainerStarted","Data":"9ace6d004b679257753abc6075bf8380923e4e5e4a4775e979b974fd96efc713"} Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.434041 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.440574 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.541058 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7484874686-s4fjd"] Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.543904 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.556781 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.557077 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.578678 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7484874686-s4fjd"] Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.651926 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-config-data-custom\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.651996 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-internal-tls-certs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.652020 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d2516d-24c8-400e-acd4-d35b384046bb-logs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.652047 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-public-tls-certs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.652076 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7pbx\" (UniqueName: \"kubernetes.io/projected/25d2516d-24c8-400e-acd4-d35b384046bb-kube-api-access-m7pbx\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.652124 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-config-data\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.652185 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-combined-ca-bundle\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.684865 4771 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"4e64a2c753ba61fe86049816bde3f2d30d3781447404d5b0d1dfe788f2795137"} Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.684931 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"d50f6ba84a50353092424a00113979e92e074b09068971a08d442c90f55c1c91"} Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.687831 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b56b4686-z8692" event={"ID":"66c0206a-2567-49ef-b02a-016a97c6e057","Type":"ContainerStarted","Data":"ce930545bd79125f43c2afe811a410c812d3c90988c7d7a2818cf2a59e3aeef3"} Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.687877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b56b4686-z8692" event={"ID":"66c0206a-2567-49ef-b02a-016a97c6e057","Type":"ContainerStarted","Data":"f4ad55f1edc90667256abedbc27ef80ca1081e5d9f8d8273247c8bf1698e793f"} Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.687937 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.687986 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.695792 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" event={"ID":"0de35acf-081e-4511-8445-dd3e1d7ead0e","Type":"ContainerStarted","Data":"4b593668ee323d77a62a50829c845533b664528d4d107125468161cb01b9ddbb"} Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.695852 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.735799 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74b56b4686-z8692" podStartSLOduration=4.735778315 podStartE2EDuration="4.735778315s" podCreationTimestamp="2026-01-29 09:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:58.714326399 +0000 UTC m=+1178.837166636" watchObservedRunningTime="2026-01-29 09:25:58.735778315 +0000 UTC m=+1178.858618542" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-internal-tls-certs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756099 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d2516d-24c8-400e-acd4-d35b384046bb-logs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756141 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-public-tls-certs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756172 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7pbx\" (UniqueName: \"kubernetes.io/projected/25d2516d-24c8-400e-acd4-d35b384046bb-kube-api-access-m7pbx\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756285 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-config-data\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756453 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-combined-ca-bundle\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756667 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-config-data-custom\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.756888 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d2516d-24c8-400e-acd4-d35b384046bb-logs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.766477 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" podStartSLOduration=4.7664531530000005 podStartE2EDuration="4.766453153s" podCreationTimestamp="2026-01-29 09:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:25:58.746764295 +0000 UTC m=+1178.869604512" watchObservedRunningTime="2026-01-29 09:25:58.766453153 +0000 UTC m=+1178.889293380" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.768574 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-config-data-custom\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.771198 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-public-tls-certs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 
09:25:58.772124 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-config-data\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.777473 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-combined-ca-bundle\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.781945 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d2516d-24c8-400e-acd4-d35b384046bb-internal-tls-certs\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.795576 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7pbx\" (UniqueName: \"kubernetes.io/projected/25d2516d-24c8-400e-acd4-d35b384046bb-kube-api-access-m7pbx\") pod \"barbican-api-7484874686-s4fjd\" (UID: \"25d2516d-24c8-400e-acd4-d35b384046bb\") " pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:58 crc kubenswrapper[4771]: I0129 09:25:58.900215 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:25:59 crc kubenswrapper[4771]: I0129 09:25:59.084306 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:26:00 crc kubenswrapper[4771]: I0129 09:26:00.632090 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 09:26:00 crc kubenswrapper[4771]: I0129 09:26:00.858787 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67d9579b5b-l9trm" podUID="3d093a30-424c-4a0c-a749-7a47328c4b2d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.362873 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7484874686-s4fjd"] Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.769253 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5575df4f89-jlhn2" event={"ID":"bef7ac33-b62c-4372-a3f2-98b951265ef3","Type":"ContainerStarted","Data":"e7a1d574946ef675c691a90f647b24dddff1ce8564b7324e1885966952a1e4c3"} Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.810849 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"875c1493134e1f97e616745fa3d0108f11daef30706ea0d096836ea76a7e1822"} Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.812888 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
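The prober.go:107 entries above record each failed probe with its type, pod, container, and probe output; the two horizon startup-probe failures are connection-refused errors from dashboards that are simply not listening yet, which is expected while a container is still coming up. A sketch that aggregates failures per container, same journal.log assumption:

```python
import re
from collections import Counter

PROBE = re.compile(
    r'"Probe failed" probeType="(?P<type>[^"]+)" pod="(?P<pod>[^"]+)"'
    r'.*?containerName="(?P<ctr>[^"]+)"')

fails = Counter()
with open("journal.log") as f:           # hypothetical capture of this journal
    for line in f:
        m = PROBE.search(line)
        if m:
            fails[(m["pod"], m["ctr"], m["type"])] += 1

for (pod, ctr, ptype), n in fails.most_common():
    print(f"{pod}/{ctr}: {n} failed {ptype} probe(s)")
```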
pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" event={"ID":"49846e1c-6efb-4d4f-875c-ab051d11de09","Type":"ContainerStarted","Data":"55d6323fd15e54254d28e39f520a91afee3dedb0fd5ea2caed6787755119128f"} Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.815224 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df94697f-2fq9x" event={"ID":"4f3a9db6-57bc-4625-a2de-29c8ba725e10","Type":"ContainerStarted","Data":"3e56356839a5c9c5aed52851f8d3f997e127c9e013d5cd8adce03ec653ab7474"} Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.818069 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7484874686-s4fjd" event={"ID":"25d2516d-24c8-400e-acd4-d35b384046bb","Type":"ContainerStarted","Data":"a1a23ac5838065c1f1685bd25cf2c441ca8dae65554d13ddf35ee8de9da9c535"} Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.818173 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7484874686-s4fjd" event={"ID":"25d2516d-24c8-400e-acd4-d35b384046bb","Type":"ContainerStarted","Data":"79370f303b1989daaa3aa0c6420a9326fc1ba0a87d6569ce81b9bcf7387f537b"} Jan 29 09:26:01 crc kubenswrapper[4771]: I0129 09:26:01.821270 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" event={"ID":"2a874d0d-803a-4aaf-85b1-a2584fc5a751","Type":"ContainerStarted","Data":"26f886cd01b2fbb69f90a06a13ee7c9b278a0066e0aee9f68713231565fa4828"} Jan 29 09:26:02 crc kubenswrapper[4771]: I0129 09:26:02.879435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" event={"ID":"2a874d0d-803a-4aaf-85b1-a2584fc5a751","Type":"ContainerStarted","Data":"5382debfb05be994eef710db25b72b425bb6c8c470e9baaffdcb055b98e51aaa"} Jan 29 09:26:02 crc kubenswrapper[4771]: I0129 09:26:02.905752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5575df4f89-jlhn2" event={"ID":"bef7ac33-b62c-4372-a3f2-98b951265ef3","Type":"ContainerStarted","Data":"e08a2cced187f43e84ae135d9f104dc6173826ce44ae01e56197360ea6731c70"} Jan 29 09:26:02 crc kubenswrapper[4771]: I0129 09:26:02.910138 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" podStartSLOduration=4.2858581319999995 podStartE2EDuration="8.910110707s" podCreationTimestamp="2026-01-29 09:25:54 +0000 UTC" firstStartedPulling="2026-01-29 09:25:56.363932756 +0000 UTC m=+1176.486772983" lastFinishedPulling="2026-01-29 09:26:00.988185331 +0000 UTC m=+1181.111025558" observedRunningTime="2026-01-29 09:26:02.902013666 +0000 UTC m=+1183.024853903" watchObservedRunningTime="2026-01-29 09:26:02.910110707 +0000 UTC m=+1183.032950934" Jan 29 09:26:02 crc kubenswrapper[4771]: I0129 09:26:02.949126 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5575df4f89-jlhn2" podStartSLOduration=4.366445024 podStartE2EDuration="8.949104353s" podCreationTimestamp="2026-01-29 09:25:54 +0000 UTC" firstStartedPulling="2026-01-29 09:25:56.415765612 +0000 UTC m=+1176.538605839" lastFinishedPulling="2026-01-29 09:26:00.998424941 +0000 UTC m=+1181.121265168" observedRunningTime="2026-01-29 09:26:02.935420929 +0000 UTC m=+1183.058261146" watchObservedRunningTime="2026-01-29 09:26:02.949104353 +0000 UTC m=+1183.071944580" Jan 29 09:26:02 crc kubenswrapper[4771]: I0129 09:26:02.951028 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"a3ea658f84fba15da515f1eb6ec2346db32bd860987c4ffbb904ed1276ae1423"} Jan 29 09:26:02 crc kubenswrapper[4771]: I0129 09:26:02.951076 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"bdcd90471d9ac4fdc4b522c07633b01105f5efb12b5cc6aeaed45884cc57f364"} Jan 29 09:26:02 crc kubenswrapper[4771]: I0129 09:26:02.994634 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" event={"ID":"49846e1c-6efb-4d4f-875c-ab051d11de09","Type":"ContainerStarted","Data":"d59bb7c49660ba0698e6c0907e4baf15ad2667b131a8e836779732b617601f92"} Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.014644 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8df94697f-2fq9x"] Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.016928 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df94697f-2fq9x" event={"ID":"4f3a9db6-57bc-4625-a2de-29c8ba725e10","Type":"ContainerStarted","Data":"354b820fb836e7e1773668337c4da3f7edbaea603e4e508869f3abe56328ac5b"} Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.032207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7484874686-s4fjd" event={"ID":"25d2516d-24c8-400e-acd4-d35b384046bb","Type":"ContainerStarted","Data":"8cb7dda0ebee1d417fa8d7f1fa186b6d592773df9bbf62ace1a772ecb2e11413"} Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.033176 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.033207 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.036242 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6c444788cd-vwtrs" podStartSLOduration=4.769607331 podStartE2EDuration="9.036211903s" podCreationTimestamp="2026-01-29 09:25:54 +0000 UTC" firstStartedPulling="2026-01-29 09:25:56.72199834 +0000 UTC m=+1176.844838567" lastFinishedPulling="2026-01-29 09:26:00.988602912 +0000 UTC m=+1181.111443139" observedRunningTime="2026-01-29 09:26:03.030541398 +0000 UTC m=+1183.153381635" watchObservedRunningTime="2026-01-29 09:26:03.036211903 +0000 UTC m=+1183.159052130" Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.057138 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8df94697f-2fq9x" podStartSLOduration=4.183117144 podStartE2EDuration="9.057115434s" podCreationTimestamp="2026-01-29 09:25:54 +0000 UTC" firstStartedPulling="2026-01-29 09:25:56.114185921 +0000 UTC m=+1176.237026148" lastFinishedPulling="2026-01-29 09:26:00.988184211 +0000 UTC m=+1181.111024438" observedRunningTime="2026-01-29 09:26:03.054736949 +0000 UTC m=+1183.177577196" watchObservedRunningTime="2026-01-29 09:26:03.057115434 +0000 UTC m=+1183.179955661" Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.090793 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7cf746d778-4gpwb"] Jan 29 09:26:03 crc kubenswrapper[4771]: I0129 09:26:03.098842 4771 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-api-7484874686-s4fjd" podStartSLOduration=5.098798543 podStartE2EDuration="5.098798543s" podCreationTimestamp="2026-01-29 09:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:03.080205855 +0000 UTC m=+1183.203046082" watchObservedRunningTime="2026-01-29 09:26:03.098798543 +0000 UTC m=+1183.221638770" Jan 29 09:26:04 crc kubenswrapper[4771]: I0129 09:26:04.060486 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"3dcdda56695cdd313a4f00f3cc9f6ac0029d805e0e00afecdb76ef24c581acc8"} Jan 29 09:26:04 crc kubenswrapper[4771]: I0129 09:26:04.061067 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"24553eca45b67109902f5ea0630298272c54a625f1baf4686c3a9a6f2b0a7a5d"} Jan 29 09:26:04 crc kubenswrapper[4771]: I0129 09:26:04.061092 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"c17a7045e7cebe40023bebdeec3857185ac8c119da27f7414776637d63ff1988"} Jan 29 09:26:04 crc kubenswrapper[4771]: I0129 09:26:04.062449 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8df94697f-2fq9x" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker" containerID="cri-o://354b820fb836e7e1773668337c4da3f7edbaea603e4e508869f3abe56328ac5b" gracePeriod=30 Jan 29 09:26:04 crc kubenswrapper[4771]: I0129 09:26:04.062260 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8df94697f-2fq9x" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker-log" containerID="cri-o://3e56356839a5c9c5aed52851f8d3f997e127c9e013d5cd8adce03ec653ab7474" gracePeriod=30 Jan 29 09:26:05 crc kubenswrapper[4771]: I0129 09:26:05.073165 4771 generic.go:334] "Generic (PLEG): container finished" podID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerID="3e56356839a5c9c5aed52851f8d3f997e127c9e013d5cd8adce03ec653ab7474" exitCode=143 Jan 29 09:26:05 crc kubenswrapper[4771]: I0129 09:26:05.073274 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df94697f-2fq9x" event={"ID":"4f3a9db6-57bc-4625-a2de-29c8ba725e10","Type":"ContainerDied","Data":"3e56356839a5c9c5aed52851f8d3f997e127c9e013d5cd8adce03ec653ab7474"} Jan 29 09:26:05 crc kubenswrapper[4771]: I0129 09:26:05.074349 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener" containerID="cri-o://5382debfb05be994eef710db25b72b425bb6c8c470e9baaffdcb055b98e51aaa" gracePeriod=30 Jan 29 09:26:05 crc kubenswrapper[4771]: I0129 09:26:05.074292 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener-log" containerID="cri-o://26f886cd01b2fbb69f90a06a13ee7c9b278a0066e0aee9f68713231565fa4828" gracePeriod=30 Jan 29 09:26:05 crc kubenswrapper[4771]: I0129 09:26:05.597975 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:26:05 crc kubenswrapper[4771]: I0129 09:26:05.682708 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-m957m"] Jan 29 09:26:05 crc kubenswrapper[4771]: I0129 09:26:05.683049 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b6c948c7-m957m" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerName="dnsmasq-dns" containerID="cri-o://5cfe1dc3c9aeb16b1405aa53eadb3e8740ea1900ed1718290d4ece8172741e4d" gracePeriod=10 Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.098241 4771 generic.go:334] "Generic (PLEG): container finished" podID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerID="5cfe1dc3c9aeb16b1405aa53eadb3e8740ea1900ed1718290d4ece8172741e4d" exitCode=0 Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.098328 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-m957m" event={"ID":"bcbc418c-ff34-4040-bfc9-2c8111568cd0","Type":"ContainerDied","Data":"5cfe1dc3c9aeb16b1405aa53eadb3e8740ea1900ed1718290d4ece8172741e4d"} Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.114565 4771 generic.go:334] "Generic (PLEG): container finished" podID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerID="5382debfb05be994eef710db25b72b425bb6c8c470e9baaffdcb055b98e51aaa" exitCode=0 Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.114629 4771 generic.go:334] "Generic (PLEG): container finished" podID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerID="26f886cd01b2fbb69f90a06a13ee7c9b278a0066e0aee9f68713231565fa4828" exitCode=143 Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.114811 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" event={"ID":"2a874d0d-803a-4aaf-85b1-a2584fc5a751","Type":"ContainerDied","Data":"5382debfb05be994eef710db25b72b425bb6c8c470e9baaffdcb055b98e51aaa"} Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.114864 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" event={"ID":"2a874d0d-803a-4aaf-85b1-a2584fc5a751","Type":"ContainerDied","Data":"26f886cd01b2fbb69f90a06a13ee7c9b278a0066e0aee9f68713231565fa4828"} Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.121110 4771 generic.go:334] "Generic (PLEG): container finished" podID="29ab9c1d-3798-4151-bf0b-63227f0e45a4" containerID="ee279fb77130501382f07268d88b8252be9bf08b614342e0da6c83af22e3313b" exitCode=0 Jan 29 09:26:06 crc kubenswrapper[4771]: I0129 09:26:06.121163 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g2fcd" event={"ID":"29ab9c1d-3798-4151-bf0b-63227f0e45a4","Type":"ContainerDied","Data":"ee279fb77130501382f07268d88b8252be9bf08b614342e0da6c83af22e3313b"} Jan 29 09:26:07 crc kubenswrapper[4771]: I0129 09:26:07.493158 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:26:07 crc kubenswrapper[4771]: I0129 09:26:07.690088 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.578581 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.704557 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-config-data\") pod \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.704807 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-scripts\") pod \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.704870 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ab9c1d-3798-4151-bf0b-63227f0e45a4-etc-machine-id\") pod \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.704932 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-db-sync-config-data\") pod \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.705045 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-combined-ca-bundle\") pod \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.705088 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppsn5\" (UniqueName: \"kubernetes.io/projected/29ab9c1d-3798-4151-bf0b-63227f0e45a4-kube-api-access-ppsn5\") pod \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\" (UID: \"29ab9c1d-3798-4151-bf0b-63227f0e45a4\") " Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.705989 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29ab9c1d-3798-4151-bf0b-63227f0e45a4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29ab9c1d-3798-4151-bf0b-63227f0e45a4" (UID: "29ab9c1d-3798-4151-bf0b-63227f0e45a4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.725609 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "29ab9c1d-3798-4151-bf0b-63227f0e45a4" (UID: "29ab9c1d-3798-4151-bf0b-63227f0e45a4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.726939 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-scripts" (OuterVolumeSpecName: "scripts") pod "29ab9c1d-3798-4151-bf0b-63227f0e45a4" (UID: "29ab9c1d-3798-4151-bf0b-63227f0e45a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.768846 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ab9c1d-3798-4151-bf0b-63227f0e45a4" (UID: "29ab9c1d-3798-4151-bf0b-63227f0e45a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.771064 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ab9c1d-3798-4151-bf0b-63227f0e45a4-kube-api-access-ppsn5" (OuterVolumeSpecName: "kube-api-access-ppsn5") pod "29ab9c1d-3798-4151-bf0b-63227f0e45a4" (UID: "29ab9c1d-3798-4151-bf0b-63227f0e45a4"). InnerVolumeSpecName "kube-api-access-ppsn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.811591 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.811649 4771 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ab9c1d-3798-4151-bf0b-63227f0e45a4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.811665 4771 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.811682 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.811699 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppsn5\" (UniqueName: \"kubernetes.io/projected/29ab9c1d-3798-4151-bf0b-63227f0e45a4-kube-api-access-ppsn5\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.887017 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-config-data" (OuterVolumeSpecName: "config-data") pod "29ab9c1d-3798-4151-bf0b-63227f0e45a4" (UID: "29ab9c1d-3798-4151-bf0b-63227f0e45a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:09 crc kubenswrapper[4771]: I0129 09:26:09.914274 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ab9c1d-3798-4151-bf0b-63227f0e45a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.185182 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g2fcd" event={"ID":"29ab9c1d-3798-4151-bf0b-63227f0e45a4","Type":"ContainerDied","Data":"6ea5a32a03f8c453e896fd47e7173200e7c9cac4f9b0a13e245ef5530681beb6"} Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.185268 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ea5a32a03f8c453e896fd47e7173200e7c9cac4f9b0a13e245ef5530681beb6" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.185405 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-g2fcd" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.902557 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:10 crc kubenswrapper[4771]: E0129 09:26:10.903100 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ab9c1d-3798-4151-bf0b-63227f0e45a4" containerName="cinder-db-sync" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.903113 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ab9c1d-3798-4151-bf0b-63227f0e45a4" containerName="cinder-db-sync" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.903346 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ab9c1d-3798-4151-bf0b-63227f0e45a4" containerName="cinder-db-sync" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.904528 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.916239 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rjkk" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.916543 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.916657 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.916784 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.917069 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b7b97bcf6-xn2wg" Jan 29 09:26:10 crc kubenswrapper[4771]: I0129 09:26:10.921968 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.018483 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-x8pt8"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.023964 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.048876 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-x8pt8"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.059227 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4cd08fba-2e76-4705-b4c8-8adc6266f56d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.071869 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.071937 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.072185 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.072237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-scripts\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.072283 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7d54\" (UniqueName: \"kubernetes.io/projected/4cd08fba-2e76-4705-b4c8-8adc6266f56d-kube-api-access-v7d54\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.131897 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176143 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bzf\" (UniqueName: \"kubernetes.io/projected/510d122e-e0e6-4bfd-98d6-1325ecb314bd-kube-api-access-s9bzf\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176222 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 
crc kubenswrapper[4771]: I0129 09:26:11.176263 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-scripts\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176312 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d54\" (UniqueName: \"kubernetes.io/projected/4cd08fba-2e76-4705-b4c8-8adc6266f56d-kube-api-access-v7d54\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176359 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176412 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4cd08fba-2e76-4705-b4c8-8adc6266f56d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176450 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176473 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176504 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.176590 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.177892 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4cd08fba-2e76-4705-b4c8-8adc6266f56d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.186815 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.190744 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.211290 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.219387 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-scripts\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.220631 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.222920 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.232081 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.241902 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7d54\" (UniqueName: \"kubernetes.io/projected/4cd08fba-2e76-4705-b4c8-8adc6266f56d-kube-api-access-v7d54\") pod \"cinder-scheduler-0\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.253635 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.266899 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.281317 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.281868 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.281963 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.282138 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.282416 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9bzf\" (UniqueName: \"kubernetes.io/projected/510d122e-e0e6-4bfd-98d6-1325ecb314bd-kube-api-access-s9bzf\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.283312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.284431 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.286310 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.306144 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7484874686-s4fjd" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.312235 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.322786 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9bzf\" (UniqueName: \"kubernetes.io/projected/510d122e-e0e6-4bfd-98d6-1325ecb314bd-kube-api-access-s9bzf\") pod \"dnsmasq-dns-77c9c856fc-x8pt8\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.361097 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.384192 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74887f27-8838-4d19-b458-dd0134812228-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.384344 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-scripts\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.384366 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.384399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.384462 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjwz\" (UniqueName: \"kubernetes.io/projected/74887f27-8838-4d19-b458-dd0134812228-kube-api-access-rqjwz\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.384483 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74887f27-8838-4d19-b458-dd0134812228-logs\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.384505 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data-custom\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.423568 4771 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/neutron-6dfd54df77-pk6bs"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.426683 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dfd54df77-pk6bs" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-api" containerID="cri-o://a2f987ac2d3da99904f6a154d5f08a3389cc332fb9da905bc9eb9cc15e139f31" gracePeriod=30 Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.427455 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6dfd54df77-pk6bs" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-httpd" containerID="cri-o://302de8bf57f36136b8cbeee2e5782db40b29d879f6ec788edadb6d8c9a7b2466" gracePeriod=30 Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.442168 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74b56b4686-z8692"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.442612 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74b56b4686-z8692" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api-log" containerID="cri-o://f4ad55f1edc90667256abedbc27ef80ca1081e5d9f8d8273247c8bf1698e793f" gracePeriod=30 Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.443063 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74b56b4686-z8692" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api" containerID="cri-o://ce930545bd79125f43c2afe811a410c812d3c90988c7d7a2818cf2a59e3aeef3" gracePeriod=30 Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.451737 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74b56b4686-z8692" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.475427 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78b5bf9d6f-2454r"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.477375 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.485899 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74887f27-8838-4d19-b458-dd0134812228-logs\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.485948 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data-custom\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.485996 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74887f27-8838-4d19-b458-dd0134812228-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.486102 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-scripts\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.486118 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.486147 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.486207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqjwz\" (UniqueName: \"kubernetes.io/projected/74887f27-8838-4d19-b458-dd0134812228-kube-api-access-rqjwz\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.489329 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78b5bf9d6f-2454r"] Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.490689 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74887f27-8838-4d19-b458-dd0134812228-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.495373 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74887f27-8838-4d19-b458-dd0134812228-logs\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.520266 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.522458 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.522534 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqjwz\" (UniqueName: \"kubernetes.io/projected/74887f27-8838-4d19-b458-dd0134812228-kube-api-access-rqjwz\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.532825 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data-custom\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.541833 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-scripts\") pod \"cinder-api-0\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.590237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-ovndb-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.590403 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-combined-ca-bundle\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.590567 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2kf\" (UniqueName: \"kubernetes.io/projected/814be124-8d28-4fa9-b792-9d6561d105f9-kube-api-access-xs2kf\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.590626 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-public-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.590732 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-httpd-config\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " 
pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.590843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-internal-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.590912 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-config\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.693992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-public-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.694061 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-httpd-config\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.694145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-internal-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.694193 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-config\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.694233 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-ovndb-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.694287 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-combined-ca-bundle\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.694370 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs2kf\" (UniqueName: \"kubernetes.io/projected/814be124-8d28-4fa9-b792-9d6561d105f9-kube-api-access-xs2kf\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.700617 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-public-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.701422 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-config\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.701619 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.712734 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-internal-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.716447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-ovndb-tls-certs\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.732632 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-combined-ca-bundle\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.737314 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/814be124-8d28-4fa9-b792-9d6561d105f9-httpd-config\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.739038 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs2kf\" (UniqueName: \"kubernetes.io/projected/814be124-8d28-4fa9-b792-9d6561d105f9-kube-api-access-xs2kf\") pod \"neutron-78b5bf9d6f-2454r\" (UID: \"814be124-8d28-4fa9-b792-9d6561d105f9\") " pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.815338 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6dfd54df77-pk6bs" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": read tcp 10.217.0.2:39612->10.217.0.154:9696: read: connection reset by peer" Jan 29 09:26:11 crc kubenswrapper[4771]: I0129 09:26:11.902915 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:12 crc kubenswrapper[4771]: I0129 09:26:12.249939 4771 generic.go:334] "Generic (PLEG): container finished" podID="66c0206a-2567-49ef-b02a-016a97c6e057" containerID="f4ad55f1edc90667256abedbc27ef80ca1081e5d9f8d8273247c8bf1698e793f" exitCode=143 Jan 29 09:26:12 crc kubenswrapper[4771]: I0129 09:26:12.250036 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b56b4686-z8692" event={"ID":"66c0206a-2567-49ef-b02a-016a97c6e057","Type":"ContainerDied","Data":"f4ad55f1edc90667256abedbc27ef80ca1081e5d9f8d8273247c8bf1698e793f"} Jan 29 09:26:12 crc kubenswrapper[4771]: I0129 09:26:12.255664 4771 generic.go:334] "Generic (PLEG): container finished" podID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerID="302de8bf57f36136b8cbeee2e5782db40b29d879f6ec788edadb6d8c9a7b2466" exitCode=0 Jan 29 09:26:12 crc kubenswrapper[4771]: I0129 09:26:12.255753 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfd54df77-pk6bs" event={"ID":"f6f77513-ae83-4d90-9959-732cd517d2eb","Type":"ContainerDied","Data":"302de8bf57f36136b8cbeee2e5782db40b29d879f6ec788edadb6d8c9a7b2466"} Jan 29 09:26:12 crc kubenswrapper[4771]: I0129 09:26:12.940482 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:26:12 crc kubenswrapper[4771]: I0129 09:26:12.941406 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063200 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2gz7\" (UniqueName: \"kubernetes.io/projected/bcbc418c-ff34-4040-bfc9-2c8111568cd0-kube-api-access-w2gz7\") pod \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063298 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data-custom\") pod \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063340 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a874d0d-803a-4aaf-85b1-a2584fc5a751-logs\") pod \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-dns-svc\") pod \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063412 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-sb\") pod \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063439 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-nb\") pod \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063477 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmxm2\" (UniqueName: \"kubernetes.io/projected/2a874d0d-803a-4aaf-85b1-a2584fc5a751-kube-api-access-xmxm2\") pod \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063523 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-config\") pod \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\" (UID: \"bcbc418c-ff34-4040-bfc9-2c8111568cd0\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063725 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-combined-ca-bundle\") pod \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.063766 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data\") pod \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\" (UID: \"2a874d0d-803a-4aaf-85b1-a2584fc5a751\") " Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.074524 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a874d0d-803a-4aaf-85b1-a2584fc5a751-logs" (OuterVolumeSpecName: "logs") pod "2a874d0d-803a-4aaf-85b1-a2584fc5a751" (UID: "2a874d0d-803a-4aaf-85b1-a2584fc5a751"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.111658 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a874d0d-803a-4aaf-85b1-a2584fc5a751-kube-api-access-xmxm2" (OuterVolumeSpecName: "kube-api-access-xmxm2") pod "2a874d0d-803a-4aaf-85b1-a2584fc5a751" (UID: "2a874d0d-803a-4aaf-85b1-a2584fc5a751"). InnerVolumeSpecName "kube-api-access-xmxm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.128262 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a874d0d-803a-4aaf-85b1-a2584fc5a751" (UID: "2a874d0d-803a-4aaf-85b1-a2584fc5a751"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.148152 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcbc418c-ff34-4040-bfc9-2c8111568cd0-kube-api-access-w2gz7" (OuterVolumeSpecName: "kube-api-access-w2gz7") pod "bcbc418c-ff34-4040-bfc9-2c8111568cd0" (UID: "bcbc418c-ff34-4040-bfc9-2c8111568cd0"). InnerVolumeSpecName "kube-api-access-w2gz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.166497 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a874d0d-803a-4aaf-85b1-a2584fc5a751-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.166550 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmxm2\" (UniqueName: \"kubernetes.io/projected/2a874d0d-803a-4aaf-85b1-a2584fc5a751-kube-api-access-xmxm2\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.166564 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2gz7\" (UniqueName: \"kubernetes.io/projected/bcbc418c-ff34-4040-bfc9-2c8111568cd0-kube-api-access-w2gz7\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.166576 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.249073 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.338968 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-m957m" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.339519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-m957m" event={"ID":"bcbc418c-ff34-4040-bfc9-2c8111568cd0","Type":"ContainerDied","Data":"1853ff60bb913a1fa21d2f7bd580aff674fba807ac45b392c296ec8064e22dbf"} Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.339609 4771 scope.go:117] "RemoveContainer" containerID="5cfe1dc3c9aeb16b1405aa53eadb3e8740ea1900ed1718290d4ece8172741e4d" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.342796 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" event={"ID":"2a874d0d-803a-4aaf-85b1-a2584fc5a751","Type":"ContainerDied","Data":"4a79bbc89b96d8ab5a487c7e39dcda491fc417bf8b1e5ddbf7488e14da4bf3e0"} Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.342986 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7cf746d778-4gpwb" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.406096 4771 scope.go:117] "RemoveContainer" containerID="a183dec526fb5112857524996951924548b155452a711f8e06151b0aaad6fd77" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.421762 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.505661 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6dfd54df77-pk6bs" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.529054 4771 scope.go:117] "RemoveContainer" containerID="5382debfb05be994eef710db25b72b425bb6c8c470e9baaffdcb055b98e51aaa" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.533980 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a874d0d-803a-4aaf-85b1-a2584fc5a751" (UID: "2a874d0d-803a-4aaf-85b1-a2584fc5a751"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.573348 4771 scope.go:117] "RemoveContainer" containerID="26f886cd01b2fbb69f90a06a13ee7c9b278a0066e0aee9f68713231565fa4828" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.581193 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.654589 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bcbc418c-ff34-4040-bfc9-2c8111568cd0" (UID: "bcbc418c-ff34-4040-bfc9-2c8111568cd0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.684388 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.740374 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bcbc418c-ff34-4040-bfc9-2c8111568cd0" (UID: "bcbc418c-ff34-4040-bfc9-2c8111568cd0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.765942 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-x8pt8"] Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.786513 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.791391 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bcbc418c-ff34-4040-bfc9-2c8111568cd0" (UID: "bcbc418c-ff34-4040-bfc9-2c8111568cd0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.793436 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:13 crc kubenswrapper[4771]: W0129 09:26:13.797977 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74887f27_8838_4d19_b458_dd0134812228.slice/crio-089fc971c1e5db03a3a03cfa74a80e02069ca0067829c395926f7dd5c8413e8e WatchSource:0}: Error finding container 089fc971c1e5db03a3a03cfa74a80e02069ca0067829c395926f7dd5c8413e8e: Status 404 returned error can't find the container with id 089fc971c1e5db03a3a03cfa74a80e02069ca0067829c395926f7dd5c8413e8e Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.798117 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data" (OuterVolumeSpecName: "config-data") pod "2a874d0d-803a-4aaf-85b1-a2584fc5a751" (UID: "2a874d0d-803a-4aaf-85b1-a2584fc5a751"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.810241 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-config" (OuterVolumeSpecName: "config") pod "bcbc418c-ff34-4040-bfc9-2c8111568cd0" (UID: "bcbc418c-ff34-4040-bfc9-2c8111568cd0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:13 crc kubenswrapper[4771]: E0129 09:26:13.820295 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.892281 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.892337 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbc418c-ff34-4040-bfc9-2c8111568cd0-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:13 crc kubenswrapper[4771]: I0129 09:26:13.892350 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a874d0d-803a-4aaf-85b1-a2584fc5a751-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.100589 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7cf746d778-4gpwb"] Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.129790 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7cf746d778-4gpwb"] Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.146978 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-m957m"] Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.199772 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-m957m"] Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.211912 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78b5bf9d6f-2454r"] Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.270895 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.270946 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.399612 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerStarted","Data":"10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.399797 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="ceilometer-notification-agent" containerID="cri-o://73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07" gracePeriod=30 Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.399879 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.400245 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="proxy-httpd" containerID="cri-o://10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836" gracePeriod=30 Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.400292 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="sg-core" containerID="cri-o://075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5" gracePeriod=30 Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.405316 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5bf9d6f-2454r" event={"ID":"814be124-8d28-4fa9-b792-9d6561d105f9","Type":"ContainerStarted","Data":"6ec5cda02f93c0c6ac3cfcaa8a625cae0d39da4f071f756c94f39c19fff1cf84"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.409310 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74887f27-8838-4d19-b458-dd0134812228","Type":"ContainerStarted","Data":"089fc971c1e5db03a3a03cfa74a80e02069ca0067829c395926f7dd5c8413e8e"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.412585 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" event={"ID":"510d122e-e0e6-4bfd-98d6-1325ecb314bd","Type":"ContainerStarted","Data":"e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.412616 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" event={"ID":"510d122e-e0e6-4bfd-98d6-1325ecb314bd","Type":"ContainerStarted","Data":"9907e9fa1e1ce2e49cba3c11225d33db277d557f74930fedf6075fa816588cad"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.421992 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4cd08fba-2e76-4705-b4c8-8adc6266f56d","Type":"ContainerStarted","Data":"e36186061f053319b573cc5b506a5b29ffeb8def5d0ce33de93d83791413d396"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.460103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e6ce7b26-bcc5-4306-ab2c-5691cceeb18f","Type":"ContainerStarted","Data":"177a556ab553495c3ecac6d6448d438cf46804303c52ac8d0a16295ba24d2367"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.480321 4771 generic.go:334] "Generic (PLEG): container finished" podID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerID="a2f987ac2d3da99904f6a154d5f08a3389cc332fb9da905bc9eb9cc15e139f31" exitCode=0 Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.480928 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfd54df77-pk6bs" event={"ID":"f6f77513-ae83-4d90-9959-732cd517d2eb","Type":"ContainerDied","Data":"a2f987ac2d3da99904f6a154d5f08a3389cc332fb9da905bc9eb9cc15e139f31"} Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.528763 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=104.780913671 podStartE2EDuration="2m6.528739842s" podCreationTimestamp="2026-01-29 09:24:08 +0000 UTC" firstStartedPulling="2026-01-29 09:25:39.240751731 +0000 UTC m=+1159.363591958" lastFinishedPulling="2026-01-29 09:26:00.988577912 +0000 UTC 
m=+1181.111418129" observedRunningTime="2026-01-29 09:26:14.519964612 +0000 UTC m=+1194.642804859" watchObservedRunningTime="2026-01-29 09:26:14.528739842 +0000 UTC m=+1194.651580069" Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.893299 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" path="/var/lib/kubelet/pods/2a874d0d-803a-4aaf-85b1-a2584fc5a751/volumes" Jan 29 09:26:14 crc kubenswrapper[4771]: I0129 09:26:14.901651 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" path="/var/lib/kubelet/pods/bcbc418c-ff34-4040-bfc9-2c8111568cd0/volumes" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.076693 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-x8pt8"] Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.124330 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-756b74b74c-jfsqx"] Jan 29 09:26:15 crc kubenswrapper[4771]: E0129 09:26:15.124773 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerName="dnsmasq-dns" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.124794 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerName="dnsmasq-dns" Jan 29 09:26:15 crc kubenswrapper[4771]: E0129 09:26:15.124813 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerName="init" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.124822 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerName="init" Jan 29 09:26:15 crc kubenswrapper[4771]: E0129 09:26:15.124839 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.124845 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener" Jan 29 09:26:15 crc kubenswrapper[4771]: E0129 09:26:15.124857 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener-log" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.124863 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener-log" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.125049 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener-log" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.125080 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerName="dnsmasq-dns" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.125090 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a874d0d-803a-4aaf-85b1-a2584fc5a751" containerName="barbican-keystone-listener" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.126166 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.129585 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.153691 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756b74b74c-jfsqx"] Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.243472 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-sb\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.243545 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-swift-storage-0\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.243601 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-config\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.243626 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-nb\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.243651 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-svc\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.243685 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4shsv\" (UniqueName: \"kubernetes.io/projected/4a60492e-2744-495e-ac7b-d6ff1d970385-kube-api-access-4shsv\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.326659 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.346605 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-sb\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.346682 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-swift-storage-0\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.346758 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-config\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.346780 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-nb\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.346805 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-svc\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.346838 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4shsv\" (UniqueName: \"kubernetes.io/projected/4a60492e-2744-495e-ac7b-d6ff1d970385-kube-api-access-4shsv\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.348464 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-swift-storage-0\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.348810 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-config\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.349162 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-sb\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.349361 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-svc\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.349377 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-nb\") pod 
\"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.369572 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4shsv\" (UniqueName: \"kubernetes.io/projected/4a60492e-2744-495e-ac7b-d6ff1d970385-kube-api-access-4shsv\") pod \"dnsmasq-dns-756b74b74c-jfsqx\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.372106 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.406776 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b6c948c7-m957m" podUID="bcbc418c-ff34-4040-bfc9-2c8111568cd0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: i/o timeout" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.453451 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-config\") pod \"f6f77513-ae83-4d90-9959-732cd517d2eb\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.453624 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn9kv\" (UniqueName: \"kubernetes.io/projected/f6f77513-ae83-4d90-9959-732cd517d2eb-kube-api-access-hn9kv\") pod \"f6f77513-ae83-4d90-9959-732cd517d2eb\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.453748 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-combined-ca-bundle\") pod \"f6f77513-ae83-4d90-9959-732cd517d2eb\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.453786 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-httpd-config\") pod \"f6f77513-ae83-4d90-9959-732cd517d2eb\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.453828 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-public-tls-certs\") pod \"f6f77513-ae83-4d90-9959-732cd517d2eb\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.455042 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-ovndb-tls-certs\") pod \"f6f77513-ae83-4d90-9959-732cd517d2eb\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.455180 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-internal-tls-certs\") pod \"f6f77513-ae83-4d90-9959-732cd517d2eb\" (UID: \"f6f77513-ae83-4d90-9959-732cd517d2eb\") " Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 
09:26:15.490164 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f6f77513-ae83-4d90-9959-732cd517d2eb" (UID: "f6f77513-ae83-4d90-9959-732cd517d2eb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.490278 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f77513-ae83-4d90-9959-732cd517d2eb-kube-api-access-hn9kv" (OuterVolumeSpecName: "kube-api-access-hn9kv") pod "f6f77513-ae83-4d90-9959-732cd517d2eb" (UID: "f6f77513-ae83-4d90-9959-732cd517d2eb"). InnerVolumeSpecName "kube-api-access-hn9kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.508777 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.559381 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.559419 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn9kv\" (UniqueName: \"kubernetes.io/projected/f6f77513-ae83-4d90-9959-732cd517d2eb-kube-api-access-hn9kv\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.592946 4771 generic.go:334] "Generic (PLEG): container finished" podID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerID="e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e" exitCode=0 Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.593066 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" event={"ID":"510d122e-e0e6-4bfd-98d6-1325ecb314bd","Type":"ContainerDied","Data":"e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e"} Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.593114 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" event={"ID":"510d122e-e0e6-4bfd-98d6-1325ecb314bd","Type":"ContainerStarted","Data":"52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7"} Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.593135 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" podUID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerName="dnsmasq-dns" containerID="cri-o://52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7" gracePeriod=10 Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.593608 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.603840 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfd54df77-pk6bs" event={"ID":"f6f77513-ae83-4d90-9959-732cd517d2eb","Type":"ContainerDied","Data":"ee0a6497953fc608baf95a5102519bb3565d90e10354b2edcebe697004414bc1"} Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.603917 4771 scope.go:117] "RemoveContainer" containerID="302de8bf57f36136b8cbeee2e5782db40b29d879f6ec788edadb6d8c9a7b2466" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.604089 4771 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dfd54df77-pk6bs" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.612169 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6f77513-ae83-4d90-9959-732cd517d2eb" (UID: "f6f77513-ae83-4d90-9959-732cd517d2eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.640768 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f6f77513-ae83-4d90-9959-732cd517d2eb" (UID: "f6f77513-ae83-4d90-9959-732cd517d2eb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.649287 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" podStartSLOduration=5.649261129 podStartE2EDuration="5.649261129s" podCreationTimestamp="2026-01-29 09:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:15.625607523 +0000 UTC m=+1195.748447770" watchObservedRunningTime="2026-01-29 09:26:15.649261129 +0000 UTC m=+1195.772101376" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.664245 4771 generic.go:334] "Generic (PLEG): container finished" podID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerID="10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836" exitCode=0 Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.664323 4771 generic.go:334] "Generic (PLEG): container finished" podID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerID="075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5" exitCode=2 Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.664368 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerDied","Data":"10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836"} Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.664397 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerDied","Data":"075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5"} Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.668616 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.668656 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.674429 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5bf9d6f-2454r" event={"ID":"814be124-8d28-4fa9-b792-9d6561d105f9","Type":"ContainerStarted","Data":"4c659d0ed81a9a741e1e95e5e6c5229a4203d996db6eada513225a8c350c73e1"} Jan 29 
09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.687572 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74887f27-8838-4d19-b458-dd0134812228","Type":"ContainerStarted","Data":"805baec997d193d1a69880cb2f55ce468a4220eb8e8f87b5d47df294c65a5984"} Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.729008 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-config" (OuterVolumeSpecName: "config") pod "f6f77513-ae83-4d90-9959-732cd517d2eb" (UID: "f6f77513-ae83-4d90-9959-732cd517d2eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.747778 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.758133 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f6f77513-ae83-4d90-9959-732cd517d2eb" (UID: "f6f77513-ae83-4d90-9959-732cd517d2eb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.773574 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.773609 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.798932 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f6f77513-ae83-4d90-9959-732cd517d2eb" (UID: "f6f77513-ae83-4d90-9959-732cd517d2eb"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:15 crc kubenswrapper[4771]: I0129 09:26:15.876585 4771 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f77513-ae83-4d90-9959-732cd517d2eb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.095913 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6dfd54df77-pk6bs"] Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.132219 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6dfd54df77-pk6bs"] Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.139749 4771 scope.go:117] "RemoveContainer" containerID="a2f987ac2d3da99904f6a154d5f08a3389cc332fb9da905bc9eb9cc15e139f31" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.265530 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756b74b74c-jfsqx"] Jan 29 09:26:16 crc kubenswrapper[4771]: W0129 09:26:16.289241 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a60492e_2744_495e_ac7b_d6ff1d970385.slice/crio-7f81c7f5ef331428a90b844872e19297fe7a6b355a49883aab0bdf0a5b763eb0 WatchSource:0}: Error finding container 7f81c7f5ef331428a90b844872e19297fe7a6b355a49883aab0bdf0a5b763eb0: Status 404 returned error can't find the container with id 7f81c7f5ef331428a90b844872e19297fe7a6b355a49883aab0bdf0a5b763eb0 Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.464201 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.494795 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-sb\") pod \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.494829 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config\") pod \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.494912 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-dns-svc\") pod \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.495068 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-nb\") pod \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.495132 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9bzf\" (UniqueName: \"kubernetes.io/projected/510d122e-e0e6-4bfd-98d6-1325ecb314bd-kube-api-access-s9bzf\") pod \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.519066 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510d122e-e0e6-4bfd-98d6-1325ecb314bd-kube-api-access-s9bzf" (OuterVolumeSpecName: "kube-api-access-s9bzf") pod "510d122e-e0e6-4bfd-98d6-1325ecb314bd" (UID: "510d122e-e0e6-4bfd-98d6-1325ecb314bd"). InnerVolumeSpecName "kube-api-access-s9bzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.599585 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9bzf\" (UniqueName: \"kubernetes.io/projected/510d122e-e0e6-4bfd-98d6-1325ecb314bd-kube-api-access-s9bzf\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.603516 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.702384 4771 generic.go:334] "Generic (PLEG): container finished" podID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerID="73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07" exitCode=0 Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.702527 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.703288 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerDied","Data":"73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07"} Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.703337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe","Type":"ContainerDied","Data":"c58f60ef5e1e947a87cc94291df2411997e935a024d84fd483b011a92fdc5e13"} Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.703369 4771 scope.go:117] "RemoveContainer" containerID="10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.714825 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5bf9d6f-2454r" event={"ID":"814be124-8d28-4fa9-b792-9d6561d105f9","Type":"ContainerStarted","Data":"cd1fbb175b81cf453f18dcc3faf3abcc774f95f4e14d80281eb0c25cdb8147d9"} Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.715171 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.727972 4771 generic.go:334] "Generic (PLEG): container finished" podID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerID="52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7" exitCode=0 Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.728057 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" event={"ID":"510d122e-e0e6-4bfd-98d6-1325ecb314bd","Type":"ContainerDied","Data":"52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7"} Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.728087 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" event={"ID":"510d122e-e0e6-4bfd-98d6-1325ecb314bd","Type":"ContainerDied","Data":"9907e9fa1e1ce2e49cba3c11225d33db277d557f74930fedf6075fa816588cad"} Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.728151 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-x8pt8" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.731373 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" event={"ID":"4a60492e-2744-495e-ac7b-d6ff1d970385","Type":"ContainerStarted","Data":"7f81c7f5ef331428a90b844872e19297fe7a6b355a49883aab0bdf0a5b763eb0"} Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.770944 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78b5bf9d6f-2454r" podStartSLOduration=5.770912248 podStartE2EDuration="5.770912248s" podCreationTimestamp="2026-01-29 09:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:16.741962787 +0000 UTC m=+1196.864803014" watchObservedRunningTime="2026-01-29 09:26:16.770912248 +0000 UTC m=+1196.893752475" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.802666 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-config-data\") pod \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.802747 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-sg-core-conf-yaml\") pod \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.802834 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-combined-ca-bundle\") pod \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.802884 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-log-httpd\") pod \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.802920 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfnpt\" (UniqueName: \"kubernetes.io/projected/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-kube-api-access-cfnpt\") pod \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.803089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-scripts\") pod \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.803111 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-run-httpd\") pod \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\" (UID: \"ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe\") " Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.803901 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" (UID: "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.805097 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" (UID: "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.843460 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "510d122e-e0e6-4bfd-98d6-1325ecb314bd" (UID: "510d122e-e0e6-4bfd-98d6-1325ecb314bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.844241 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-kube-api-access-cfnpt" (OuterVolumeSpecName: "kube-api-access-cfnpt") pod "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" (UID: "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe"). InnerVolumeSpecName "kube-api-access-cfnpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.844907 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-scripts" (OuterVolumeSpecName: "scripts") pod "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" (UID: "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.858392 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "510d122e-e0e6-4bfd-98d6-1325ecb314bd" (UID: "510d122e-e0e6-4bfd-98d6-1325ecb314bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.877675 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" (UID: "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.892578 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "510d122e-e0e6-4bfd-98d6-1325ecb314bd" (UID: "510d122e-e0e6-4bfd-98d6-1325ecb314bd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907763 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907809 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907825 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907835 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907848 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907857 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907867 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfnpt\" (UniqueName: \"kubernetes.io/projected/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-kube-api-access-cfnpt\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.907882 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.916817 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" path="/var/lib/kubelet/pods/f6f77513-ae83-4d90-9959-732cd517d2eb/volumes" Jan 29 09:26:16 crc kubenswrapper[4771]: I0129 09:26:16.992381 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" (UID: "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.009790 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config" (OuterVolumeSpecName: "config") pod "510d122e-e0e6-4bfd-98d6-1325ecb314bd" (UID: "510d122e-e0e6-4bfd-98d6-1325ecb314bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.010457 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config\") pod \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\" (UID: \"510d122e-e0e6-4bfd-98d6-1325ecb314bd\") " Jan 29 09:26:17 crc kubenswrapper[4771]: W0129 09:26:17.010589 4771 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/510d122e-e0e6-4bfd-98d6-1325ecb314bd/volumes/kubernetes.io~configmap/config Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.010615 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config" (OuterVolumeSpecName: "config") pod "510d122e-e0e6-4bfd-98d6-1325ecb314bd" (UID: "510d122e-e0e6-4bfd-98d6-1325ecb314bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.011471 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.011497 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510d122e-e0e6-4bfd-98d6-1325ecb314bd-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.081538 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-config-data" (OuterVolumeSpecName: "config-data") pod "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" (UID: "ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.085944 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74b56b4686-z8692" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:42832->10.217.0.162:9311: read: connection reset by peer" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.086316 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74b56b4686-z8692" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:42836->10.217.0.162:9311: read: connection reset by peer" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.116269 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.222598 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-x8pt8"] Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.237770 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-x8pt8"] Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.444772 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.479773 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.498133 4771 scope.go:117] "RemoveContainer" containerID="075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.521770 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.522318 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="proxy-httpd" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522332 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="proxy-httpd" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.522347 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerName="dnsmasq-dns" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522354 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerName="dnsmasq-dns" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.522367 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-api" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522373 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-api" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.522384 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="ceilometer-notification-agent" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522391 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="ceilometer-notification-agent" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.522430 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="sg-core" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522437 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="sg-core" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.522449 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-httpd" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522457 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-httpd" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.522468 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerName="init" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522474 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerName="init" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522672 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-httpd" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522688 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="proxy-httpd" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522718 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="ceilometer-notification-agent" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522730 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f77513-ae83-4d90-9959-732cd517d2eb" containerName="neutron-api" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522743 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" containerName="dnsmasq-dns" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.522756 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" containerName="sg-core" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.524488 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.539340 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.539510 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.560235 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.621087 4771 scope.go:117] "RemoveContainer" containerID="73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.650091 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.650361 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.650424 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-scripts\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.650478 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjbtl\" (UniqueName: \"kubernetes.io/projected/fa7186d1-6aeb-4270-9762-ffb01552509e-kube-api-access-cjbtl\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.650501 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-config-data\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.650543 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-run-httpd\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.650576 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-log-httpd\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.700786 4771 scope.go:117] "RemoveContainer" containerID="10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 
09:26:17.702485 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836\": container with ID starting with 10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836 not found: ID does not exist" containerID="10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.702534 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836"} err="failed to get container status \"10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836\": rpc error: code = NotFound desc = could not find container \"10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836\": container with ID starting with 10c0804c9216d84087d50ef5e9d07dead5c30a5e1d02b27f1670867bcff44836 not found: ID does not exist" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.702573 4771 scope.go:117] "RemoveContainer" containerID="075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.702932 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5\": container with ID starting with 075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5 not found: ID does not exist" containerID="075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.702980 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5"} err="failed to get container status \"075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5\": rpc error: code = NotFound desc = could not find container \"075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5\": container with ID starting with 075396a5285c6b7895f4cd9fa51ba852d0d995ef252f9dbbafa662b63dede4b5 not found: ID does not exist" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.703015 4771 scope.go:117] "RemoveContainer" containerID="73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07" Jan 29 09:26:17 crc kubenswrapper[4771]: E0129 09:26:17.703351 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07\": container with ID starting with 73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07 not found: ID does not exist" containerID="73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.703376 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07"} err="failed to get container status \"73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07\": rpc error: code = NotFound desc = could not find container \"73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07\": container with ID starting with 73df9edd8aa6c0807f51f2298353270081c9dfda34e3b112c443fd9e1bc4ed07 not found: ID does not exist" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.703390 4771 
scope.go:117] "RemoveContainer" containerID="52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.752786 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.753488 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-scripts\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.753617 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjbtl\" (UniqueName: \"kubernetes.io/projected/fa7186d1-6aeb-4270-9762-ffb01552509e-kube-api-access-cjbtl\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.753666 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-config-data\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.753874 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-run-httpd\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.753954 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-log-httpd\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.754043 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.756352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-run-httpd\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.765246 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-log-httpd\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.768264 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-scripts\") pod \"ceilometer-0\" (UID: 
\"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.776536 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.792221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.796151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjbtl\" (UniqueName: \"kubernetes.io/projected/fa7186d1-6aeb-4270-9762-ffb01552509e-kube-api-access-cjbtl\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.832679 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-config-data\") pod \"ceilometer-0\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.835355 4771 generic.go:334] "Generic (PLEG): container finished" podID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerID="93a5e12847183f0c52406a028207ef573966caceed7feedd08270e55dd988293" exitCode=0 Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.835846 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" event={"ID":"4a60492e-2744-495e-ac7b-d6ff1d970385","Type":"ContainerDied","Data":"93a5e12847183f0c52406a028207ef573966caceed7feedd08270e55dd988293"} Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.863293 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4cd08fba-2e76-4705-b4c8-8adc6266f56d","Type":"ContainerStarted","Data":"1a4843cba8c89c39cb2f252de5fcc3c6837c8cc5a16eaf14d9a4009a4b8afa76"} Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.865155 4771 scope.go:117] "RemoveContainer" containerID="e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.885229 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74887f27-8838-4d19-b458-dd0134812228","Type":"ContainerStarted","Data":"3e2557f09c1a7adaa32ed62d19fdb33d3027f5be10e0aa520d775be2cded6f13"} Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.885484 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="74887f27-8838-4d19-b458-dd0134812228" containerName="cinder-api-log" containerID="cri-o://805baec997d193d1a69880cb2f55ce468a4220eb8e8f87b5d47df294c65a5984" gracePeriod=30 Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.885601 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.886027 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="74887f27-8838-4d19-b458-dd0134812228" 
containerName="cinder-api" containerID="cri-o://3e2557f09c1a7adaa32ed62d19fdb33d3027f5be10e0aa520d775be2cded6f13" gracePeriod=30 Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.960859 4771 generic.go:334] "Generic (PLEG): container finished" podID="66c0206a-2567-49ef-b02a-016a97c6e057" containerID="ce930545bd79125f43c2afe811a410c812d3c90988c7d7a2818cf2a59e3aeef3" exitCode=0 Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.961267 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b56b4686-z8692" event={"ID":"66c0206a-2567-49ef-b02a-016a97c6e057","Type":"ContainerDied","Data":"ce930545bd79125f43c2afe811a410c812d3c90988c7d7a2818cf2a59e3aeef3"} Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.976096 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:17 crc kubenswrapper[4771]: I0129 09:26:17.987871 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.987847201 podStartE2EDuration="6.987847201s" podCreationTimestamp="2026-01-29 09:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:17.936255742 +0000 UTC m=+1198.059095969" watchObservedRunningTime="2026-01-29 09:26:17.987847201 +0000 UTC m=+1198.110687428" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.019935 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.051328 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.115311 4771 scope.go:117] "RemoveContainer" containerID="52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7" Jan 29 09:26:18 crc kubenswrapper[4771]: E0129 09:26:18.117638 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7\": container with ID starting with 52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7 not found: ID does not exist" containerID="52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.117721 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7"} err="failed to get container status \"52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7\": rpc error: code = NotFound desc = could not find container \"52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7\": container with ID starting with 52f4dfe7fc122df9407a765840f0e626625384d2baac314da201923c6a0ec9d7 not found: ID does not exist" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.117750 4771 scope.go:117] "RemoveContainer" containerID="e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e" Jan 29 09:26:18 crc kubenswrapper[4771]: E0129 09:26:18.118119 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e\": container with ID starting with 
e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e not found: ID does not exist" containerID="e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.118158 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e"} err="failed to get container status \"e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e\": rpc error: code = NotFound desc = could not find container \"e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e\": container with ID starting with e469561aaa2a7250a26af22ec74b1687ab1d4a4c791ca760c986a309e0d3988e not found: ID does not exist" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.170936 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-combined-ca-bundle\") pod \"66c0206a-2567-49ef-b02a-016a97c6e057\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.171737 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data\") pod \"66c0206a-2567-49ef-b02a-016a97c6e057\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.171841 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66c0206a-2567-49ef-b02a-016a97c6e057-logs\") pod \"66c0206a-2567-49ef-b02a-016a97c6e057\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.171970 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4thx\" (UniqueName: \"kubernetes.io/projected/66c0206a-2567-49ef-b02a-016a97c6e057-kube-api-access-f4thx\") pod \"66c0206a-2567-49ef-b02a-016a97c6e057\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.172360 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66c0206a-2567-49ef-b02a-016a97c6e057-logs" (OuterVolumeSpecName: "logs") pod "66c0206a-2567-49ef-b02a-016a97c6e057" (UID: "66c0206a-2567-49ef-b02a-016a97c6e057"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.172870 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data-custom\") pod \"66c0206a-2567-49ef-b02a-016a97c6e057\" (UID: \"66c0206a-2567-49ef-b02a-016a97c6e057\") " Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.174233 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66c0206a-2567-49ef-b02a-016a97c6e057-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.177851 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "66c0206a-2567-49ef-b02a-016a97c6e057" (UID: "66c0206a-2567-49ef-b02a-016a97c6e057"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.178069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c0206a-2567-49ef-b02a-016a97c6e057-kube-api-access-f4thx" (OuterVolumeSpecName: "kube-api-access-f4thx") pod "66c0206a-2567-49ef-b02a-016a97c6e057" (UID: "66c0206a-2567-49ef-b02a-016a97c6e057"). InnerVolumeSpecName "kube-api-access-f4thx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.212839 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66c0206a-2567-49ef-b02a-016a97c6e057" (UID: "66c0206a-2567-49ef-b02a-016a97c6e057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.272269 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data" (OuterVolumeSpecName: "config-data") pod "66c0206a-2567-49ef-b02a-016a97c6e057" (UID: "66c0206a-2567-49ef-b02a-016a97c6e057"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.276159 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.276194 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4thx\" (UniqueName: \"kubernetes.io/projected/66c0206a-2567-49ef-b02a-016a97c6e057-kube-api-access-f4thx\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.276208 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.276218 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0206a-2567-49ef-b02a-016a97c6e057-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.431387 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.446735 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.600911 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:18 crc kubenswrapper[4771]: W0129 09:26:18.619236 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa7186d1_6aeb_4270_9762_ffb01552509e.slice/crio-576a541490cd60fe3ca3a3b482d066134c376e4b459666e98e3717f7baff77b9 WatchSource:0}: Error finding container 576a541490cd60fe3ca3a3b482d066134c376e4b459666e98e3717f7baff77b9: Status 404 returned error can't find the container with id 
576a541490cd60fe3ca3a3b482d066134c376e4b459666e98e3717f7baff77b9 Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.710530 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b66bb6fb-89w2j"] Jan 29 09:26:18 crc kubenswrapper[4771]: E0129 09:26:18.711585 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api-log" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.711606 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api-log" Jan 29 09:26:18 crc kubenswrapper[4771]: E0129 09:26:18.711643 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.711652 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.711889 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api-log" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.711914 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" containerName="barbican-api" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.717655 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.730927 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b66bb6fb-89w2j"] Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.900876 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-internal-tls-certs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.902237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-config-data\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.902531 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-public-tls-certs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.902749 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-scripts\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.902887 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-combined-ca-bundle\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.902974 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkm5\" (UniqueName: \"kubernetes.io/projected/93c318ae-5098-47e2-a09d-e67fe5124ed5-kube-api-access-hgkm5\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.903046 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93c318ae-5098-47e2-a09d-e67fe5124ed5-logs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.979120 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510d122e-e0e6-4bfd-98d6-1325ecb314bd" path="/var/lib/kubelet/pods/510d122e-e0e6-4bfd-98d6-1325ecb314bd/volumes" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.981261 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe" path="/var/lib/kubelet/pods/ac1a3ad8-59de-4f46-83b0-6886b1ef4ffe/volumes" Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.993380 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" event={"ID":"4a60492e-2744-495e-ac7b-d6ff1d970385","Type":"ContainerStarted","Data":"835b2559987e6b803ee9cd560a77532092d946e3b39333ae2360b378efe0c87d"} Jan 29 09:26:18 crc kubenswrapper[4771]: I0129 09:26:18.994172 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.005777 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4cd08fba-2e76-4705-b4c8-8adc6266f56d","Type":"ContainerStarted","Data":"eef4ef30a3353403062f11e8744ad1b0c7bb15d1965e6ee9bb01c9a7590d6867"} Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.007088 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-scripts\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.007153 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-combined-ca-bundle\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.007180 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkm5\" (UniqueName: \"kubernetes.io/projected/93c318ae-5098-47e2-a09d-e67fe5124ed5-kube-api-access-hgkm5\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.007197 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93c318ae-5098-47e2-a09d-e67fe5124ed5-logs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.007337 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-internal-tls-certs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.007366 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-config-data\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.007409 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-public-tls-certs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.010556 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93c318ae-5098-47e2-a09d-e67fe5124ed5-logs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.025470 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerStarted","Data":"576a541490cd60fe3ca3a3b482d066134c376e4b459666e98e3717f7baff77b9"} Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.029348 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-public-tls-certs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.030258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-internal-tls-certs\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.030670 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-combined-ca-bundle\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.036563 4771 generic.go:334] "Generic (PLEG): container finished" podID="74887f27-8838-4d19-b458-dd0134812228" containerID="3e2557f09c1a7adaa32ed62d19fdb33d3027f5be10e0aa520d775be2cded6f13" exitCode=0 Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.036632 4771 generic.go:334] "Generic 
(PLEG): container finished" podID="74887f27-8838-4d19-b458-dd0134812228" containerID="805baec997d193d1a69880cb2f55ce468a4220eb8e8f87b5d47df294c65a5984" exitCode=143 Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.036742 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74887f27-8838-4d19-b458-dd0134812228","Type":"ContainerDied","Data":"3e2557f09c1a7adaa32ed62d19fdb33d3027f5be10e0aa520d775be2cded6f13"} Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.036827 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74887f27-8838-4d19-b458-dd0134812228","Type":"ContainerDied","Data":"805baec997d193d1a69880cb2f55ce468a4220eb8e8f87b5d47df294c65a5984"} Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.036845 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74887f27-8838-4d19-b458-dd0134812228","Type":"ContainerDied","Data":"089fc971c1e5db03a3a03cfa74a80e02069ca0067829c395926f7dd5c8413e8e"} Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.036857 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089fc971c1e5db03a3a03cfa74a80e02069ca0067829c395926f7dd5c8413e8e" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.038501 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-config-data\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.030776 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c318ae-5098-47e2-a09d-e67fe5124ed5-scripts\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.041178 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74b56b4686-z8692" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.041663 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkm5\" (UniqueName: \"kubernetes.io/projected/93c318ae-5098-47e2-a09d-e67fe5124ed5-kube-api-access-hgkm5\") pod \"placement-5b66bb6fb-89w2j\" (UID: \"93c318ae-5098-47e2-a09d-e67fe5124ed5\") " pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.041996 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b56b4686-z8692" event={"ID":"66c0206a-2567-49ef-b02a-016a97c6e057","Type":"ContainerDied","Data":"9ace6d004b679257753abc6075bf8380923e4e5e4a4775e979b974fd96efc713"} Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.042172 4771 scope.go:117] "RemoveContainer" containerID="ce930545bd79125f43c2afe811a410c812d3c90988c7d7a2818cf2a59e3aeef3" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.051028 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" podStartSLOduration=4.051006811 podStartE2EDuration="4.051006811s" podCreationTimestamp="2026-01-29 09:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:19.023225282 +0000 UTC m=+1199.146065529" watchObservedRunningTime="2026-01-29 09:26:19.051006811 +0000 UTC m=+1199.173847038" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.052488 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.090454 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.808033766 podStartE2EDuration="9.090425958s" podCreationTimestamp="2026-01-29 09:26:10 +0000 UTC" firstStartedPulling="2026-01-29 09:26:13.4904232 +0000 UTC m=+1193.613263427" lastFinishedPulling="2026-01-29 09:26:14.772815392 +0000 UTC m=+1194.895655619" observedRunningTime="2026-01-29 09:26:19.061425856 +0000 UTC m=+1199.184266083" watchObservedRunningTime="2026-01-29 09:26:19.090425958 +0000 UTC m=+1199.213266185" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.167799 4771 scope.go:117] "RemoveContainer" containerID="f4ad55f1edc90667256abedbc27ef80ca1081e5d9f8d8273247c8bf1698e793f" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.168859 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.217547 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74b56b4686-z8692"] Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.236724 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74b56b4686-z8692"] Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.315574 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74887f27-8838-4d19-b458-dd0134812228-logs\") pod \"74887f27-8838-4d19-b458-dd0134812228\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.315684 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74887f27-8838-4d19-b458-dd0134812228-etc-machine-id\") pod \"74887f27-8838-4d19-b458-dd0134812228\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.315827 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74887f27-8838-4d19-b458-dd0134812228-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74887f27-8838-4d19-b458-dd0134812228" (UID: "74887f27-8838-4d19-b458-dd0134812228"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.315840 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqjwz\" (UniqueName: \"kubernetes.io/projected/74887f27-8838-4d19-b458-dd0134812228-kube-api-access-rqjwz\") pod \"74887f27-8838-4d19-b458-dd0134812228\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.315998 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-combined-ca-bundle\") pod \"74887f27-8838-4d19-b458-dd0134812228\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.316043 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data\") pod \"74887f27-8838-4d19-b458-dd0134812228\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.316157 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data-custom\") pod \"74887f27-8838-4d19-b458-dd0134812228\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.316199 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-scripts\") pod \"74887f27-8838-4d19-b458-dd0134812228\" (UID: \"74887f27-8838-4d19-b458-dd0134812228\") " Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.316790 4771 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74887f27-8838-4d19-b458-dd0134812228-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 
29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.317661 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74887f27-8838-4d19-b458-dd0134812228-logs" (OuterVolumeSpecName: "logs") pod "74887f27-8838-4d19-b458-dd0134812228" (UID: "74887f27-8838-4d19-b458-dd0134812228"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.324447 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-scripts" (OuterVolumeSpecName: "scripts") pod "74887f27-8838-4d19-b458-dd0134812228" (UID: "74887f27-8838-4d19-b458-dd0134812228"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.324628 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74887f27-8838-4d19-b458-dd0134812228-kube-api-access-rqjwz" (OuterVolumeSpecName: "kube-api-access-rqjwz") pod "74887f27-8838-4d19-b458-dd0134812228" (UID: "74887f27-8838-4d19-b458-dd0134812228"). InnerVolumeSpecName "kube-api-access-rqjwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.329793 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "74887f27-8838-4d19-b458-dd0134812228" (UID: "74887f27-8838-4d19-b458-dd0134812228"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.376597 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74887f27-8838-4d19-b458-dd0134812228" (UID: "74887f27-8838-4d19-b458-dd0134812228"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.393213 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data" (OuterVolumeSpecName: "config-data") pod "74887f27-8838-4d19-b458-dd0134812228" (UID: "74887f27-8838-4d19-b458-dd0134812228"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.418886 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.418931 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74887f27-8838-4d19-b458-dd0134812228-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.418944 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqjwz\" (UniqueName: \"kubernetes.io/projected/74887f27-8838-4d19-b458-dd0134812228-kube-api-access-rqjwz\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.418956 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.418968 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.418981 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74887f27-8838-4d19-b458-dd0134812228-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.475280 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-67d9579b5b-l9trm" Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.554202 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d5dc7fbb8-8h9gn"] Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.559333 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon-log" containerID="cri-o://f44e7228793b3cf23069b72b399157c4fe605e6acbd5233674966340d89bd2a5" gracePeriod=30 Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.559474 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" containerID="cri-o://6a6bb5208ac980078fcfd13dcef8211355367718988a765859e0cf35d85c777b" gracePeriod=30 Jan 29 09:26:19 crc kubenswrapper[4771]: I0129 09:26:19.684747 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b66bb6fb-89w2j"] Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.056787 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerStarted","Data":"92ff20facf34cd36aeb944e32f6b48933e256d4378fc1559dc21b4cc498b966a"} Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.061176 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b66bb6fb-89w2j" event={"ID":"93c318ae-5098-47e2-a09d-e67fe5124ed5","Type":"ContainerStarted","Data":"6e970be0722f93831e405a543a20c90e9aa945ecf6d1460fb8cfad263e9b03cf"} Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.061229 4771 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-5b66bb6fb-89w2j" event={"ID":"93c318ae-5098-47e2-a09d-e67fe5124ed5","Type":"ContainerStarted","Data":"7b84891f5f764b70c0976306db0b44245b63504f2bbe77af51de5695197ff317"} Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.061286 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.129177 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.155424 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.167234 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:20 crc kubenswrapper[4771]: E0129 09:26:20.167813 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74887f27-8838-4d19-b458-dd0134812228" containerName="cinder-api-log" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.167827 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="74887f27-8838-4d19-b458-dd0134812228" containerName="cinder-api-log" Jan 29 09:26:20 crc kubenswrapper[4771]: E0129 09:26:20.168441 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74887f27-8838-4d19-b458-dd0134812228" containerName="cinder-api" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.168456 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="74887f27-8838-4d19-b458-dd0134812228" containerName="cinder-api" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.168729 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="74887f27-8838-4d19-b458-dd0134812228" containerName="cinder-api" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.168759 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="74887f27-8838-4d19-b458-dd0134812228" containerName="cinder-api-log" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.170092 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.176421 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.177036 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.177391 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.177419 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.240761 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.240831 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.240904 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.240931 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-config-data\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.240960 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.241004 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-scripts\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.241082 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.241110 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhn8\" (UniqueName: 
\"kubernetes.io/projected/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-kube-api-access-qjhn8\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.241494 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-logs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346031 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-logs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346169 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346190 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346234 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346255 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-config-data\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346273 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346313 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-scripts\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346325 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346423 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346456 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhn8\" (UniqueName: \"kubernetes.io/projected/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-kube-api-access-qjhn8\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.346761 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-logs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.355818 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.358005 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.358790 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.358798 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-config-data\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.359162 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.360231 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-scripts\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.369312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhn8\" (UniqueName: \"kubernetes.io/projected/c483fbc3-2b55-4c4e-bb34-600f2fe18bd2-kube-api-access-qjhn8\") pod \"cinder-api-0\" (UID: \"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2\") " pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.543203 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.884593 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c0206a-2567-49ef-b02a-016a97c6e057" path="/var/lib/kubelet/pods/66c0206a-2567-49ef-b02a-016a97c6e057/volumes" Jan 29 09:26:20 crc kubenswrapper[4771]: I0129 09:26:20.886080 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74887f27-8838-4d19-b458-dd0134812228" path="/var/lib/kubelet/pods/74887f27-8838-4d19-b458-dd0134812228/volumes" Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.080662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b66bb6fb-89w2j" event={"ID":"93c318ae-5098-47e2-a09d-e67fe5124ed5","Type":"ContainerStarted","Data":"abf3df875087724df91a8899b6b8bf9907ab788c78a4ca52c63660b05fcb8131"} Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.081890 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.081964 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.089398 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerStarted","Data":"2be2e345d47efe833e5fe59635838ecf5c65b8dcb0e513be33f3276f26ad9e7f"} Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.090273 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.112719 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b66bb6fb-89w2j" podStartSLOduration=3.112680236 podStartE2EDuration="3.112680236s" podCreationTimestamp="2026-01-29 09:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:21.111099563 +0000 UTC m=+1201.233939800" watchObservedRunningTime="2026-01-29 09:26:21.112680236 +0000 UTC m=+1201.235520463" Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.255147 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 09:26:21 crc kubenswrapper[4771]: I0129 09:26:21.579532 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 09:26:22 crc kubenswrapper[4771]: I0129 09:26:22.130296 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerStarted","Data":"6c43a914f08e178e08a1b93baa148b01925bcac1d37f5afb326eb20d290f501a"} Jan 29 09:26:22 crc kubenswrapper[4771]: I0129 09:26:22.136089 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2","Type":"ContainerStarted","Data":"75da858054e528e2b3d9353f0b64c4e2347aca30a040151e23af2aecbbe2ad9a"} Jan 29 09:26:22 crc kubenswrapper[4771]: I0129 09:26:22.136133 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2","Type":"ContainerStarted","Data":"48cdc5ba1409171eb9a53879f335f68b7ceafa8fe32ef481472ecd65e245a384"} Jan 29 09:26:22 crc kubenswrapper[4771]: I0129 09:26:22.258318 4771 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.002585 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:41136->10.217.0.147:8443: read: connection reset by peer" Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.154996 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c483fbc3-2b55-4c4e-bb34-600f2fe18bd2","Type":"ContainerStarted","Data":"fda8151f663eb66e6092dfb59e3a892687b41f79bc50ae509d7ab1eee60f3ad1"} Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.156455 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.166268 4771 generic.go:334] "Generic (PLEG): container finished" podID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerID="6a6bb5208ac980078fcfd13dcef8211355367718988a765859e0cf35d85c777b" exitCode=0 Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.166488 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="cinder-scheduler" containerID="cri-o://1a4843cba8c89c39cb2f252de5fcc3c6837c8cc5a16eaf14d9a4009a4b8afa76" gracePeriod=30 Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.166752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5dc7fbb8-8h9gn" event={"ID":"55cedb34-7c52-47ec-8f60-5d3e362f5948","Type":"ContainerDied","Data":"6a6bb5208ac980078fcfd13dcef8211355367718988a765859e0cf35d85c777b"} Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.166811 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="probe" containerID="cri-o://eef4ef30a3353403062f11e8744ad1b0c7bb15d1965e6ee9bb01c9a7590d6867" gracePeriod=30 Jan 29 09:26:23 crc kubenswrapper[4771]: I0129 09:26:23.185975 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.185946517 podStartE2EDuration="3.185946517s" podCreationTimestamp="2026-01-29 09:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:23.177020323 +0000 UTC m=+1203.299860550" watchObservedRunningTime="2026-01-29 09:26:23.185946517 +0000 UTC m=+1203.308786744" Jan 29 09:26:24 crc kubenswrapper[4771]: I0129 09:26:24.194573 4771 generic.go:334] "Generic (PLEG): container finished" podID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerID="eef4ef30a3353403062f11e8744ad1b0c7bb15d1965e6ee9bb01c9a7590d6867" exitCode=0 Jan 29 09:26:24 crc kubenswrapper[4771]: I0129 09:26:24.194661 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4cd08fba-2e76-4705-b4c8-8adc6266f56d","Type":"ContainerDied","Data":"eef4ef30a3353403062f11e8744ad1b0c7bb15d1965e6ee9bb01c9a7590d6867"} Jan 29 09:26:24 crc kubenswrapper[4771]: I0129 09:26:24.199847 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerStarted","Data":"069432868da7d545b505c4e37161c7b68a2787282c3db982c333f58b95f3f12c"} Jan 29 09:26:24 crc kubenswrapper[4771]: I0129 09:26:24.200214 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:26:25 crc kubenswrapper[4771]: I0129 09:26:25.511900 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:26:25 crc kubenswrapper[4771]: I0129 09:26:25.529780 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.109795956 podStartE2EDuration="8.5297603s" podCreationTimestamp="2026-01-29 09:26:17 +0000 UTC" firstStartedPulling="2026-01-29 09:26:18.621360221 +0000 UTC m=+1198.744200448" lastFinishedPulling="2026-01-29 09:26:23.041324565 +0000 UTC m=+1203.164164792" observedRunningTime="2026-01-29 09:26:24.224154646 +0000 UTC m=+1204.346994873" watchObservedRunningTime="2026-01-29 09:26:25.5297603 +0000 UTC m=+1205.652600527" Jan 29 09:26:25 crc kubenswrapper[4771]: I0129 09:26:25.599144 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-nqf55"] Jan 29 09:26:25 crc kubenswrapper[4771]: I0129 09:26:25.599656 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" podUID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerName="dnsmasq-dns" containerID="cri-o://4b593668ee323d77a62a50829c845533b664528d4d107125468161cb01b9ddbb" gracePeriod=10 Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.223845 4771 generic.go:334] "Generic (PLEG): container finished" podID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerID="4b593668ee323d77a62a50829c845533b664528d4d107125468161cb01b9ddbb" exitCode=0 Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.224213 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" event={"ID":"0de35acf-081e-4511-8445-dd3e1d7ead0e","Type":"ContainerDied","Data":"4b593668ee323d77a62a50829c845533b664528d4d107125468161cb01b9ddbb"} Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.224242 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" event={"ID":"0de35acf-081e-4511-8445-dd3e1d7ead0e","Type":"ContainerDied","Data":"9332cd9cf3edf4b31981d98fdf49314f2523b00863091880ee10d92e0195e74b"} Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.224254 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9332cd9cf3edf4b31981d98fdf49314f2523b00863091880ee10d92e0195e74b" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.230500 4771 generic.go:334] "Generic (PLEG): container finished" podID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerID="1a4843cba8c89c39cb2f252de5fcc3c6837c8cc5a16eaf14d9a4009a4b8afa76" exitCode=0 Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.230583 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4cd08fba-2e76-4705-b4c8-8adc6266f56d","Type":"ContainerDied","Data":"1a4843cba8c89c39cb2f252de5fcc3c6837c8cc5a16eaf14d9a4009a4b8afa76"} Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.230616 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"4cd08fba-2e76-4705-b4c8-8adc6266f56d","Type":"ContainerDied","Data":"e36186061f053319b573cc5b506a5b29ffeb8def5d0ce33de93d83791413d396"} Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.230651 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36186061f053319b573cc5b506a5b29ffeb8def5d0ce33de93d83791413d396" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.295207 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.309868 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.383879 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-nb\") pod \"0de35acf-081e-4511-8445-dd3e1d7ead0e\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384053 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-dns-svc\") pod \"0de35acf-081e-4511-8445-dd3e1d7ead0e\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384087 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data\") pod \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384120 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcxl9\" (UniqueName: \"kubernetes.io/projected/0de35acf-081e-4511-8445-dd3e1d7ead0e-kube-api-access-xcxl9\") pod \"0de35acf-081e-4511-8445-dd3e1d7ead0e\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384148 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-scripts\") pod \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384175 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7d54\" (UniqueName: \"kubernetes.io/projected/4cd08fba-2e76-4705-b4c8-8adc6266f56d-kube-api-access-v7d54\") pod \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-sb\") pod \"0de35acf-081e-4511-8445-dd3e1d7ead0e\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384308 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-combined-ca-bundle\") pod \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\" (UID: 
\"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384440 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-config\") pod \"0de35acf-081e-4511-8445-dd3e1d7ead0e\" (UID: \"0de35acf-081e-4511-8445-dd3e1d7ead0e\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.384475 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4cd08fba-2e76-4705-b4c8-8adc6266f56d-etc-machine-id\") pod \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.385079 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data-custom\") pod \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\" (UID: \"4cd08fba-2e76-4705-b4c8-8adc6266f56d\") " Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.398058 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd08fba-2e76-4705-b4c8-8adc6266f56d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4cd08fba-2e76-4705-b4c8-8adc6266f56d" (UID: "4cd08fba-2e76-4705-b4c8-8adc6266f56d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.398898 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4cd08fba-2e76-4705-b4c8-8adc6266f56d" (UID: "4cd08fba-2e76-4705-b4c8-8adc6266f56d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.400866 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd08fba-2e76-4705-b4c8-8adc6266f56d-kube-api-access-v7d54" (OuterVolumeSpecName: "kube-api-access-v7d54") pod "4cd08fba-2e76-4705-b4c8-8adc6266f56d" (UID: "4cd08fba-2e76-4705-b4c8-8adc6266f56d"). InnerVolumeSpecName "kube-api-access-v7d54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.405872 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de35acf-081e-4511-8445-dd3e1d7ead0e-kube-api-access-xcxl9" (OuterVolumeSpecName: "kube-api-access-xcxl9") pod "0de35acf-081e-4511-8445-dd3e1d7ead0e" (UID: "0de35acf-081e-4511-8445-dd3e1d7ead0e"). InnerVolumeSpecName "kube-api-access-xcxl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.411856 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-scripts" (OuterVolumeSpecName: "scripts") pod "4cd08fba-2e76-4705-b4c8-8adc6266f56d" (UID: "4cd08fba-2e76-4705-b4c8-8adc6266f56d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.477652 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0de35acf-081e-4511-8445-dd3e1d7ead0e" (UID: "0de35acf-081e-4511-8445-dd3e1d7ead0e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.478225 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-config" (OuterVolumeSpecName: "config") pod "0de35acf-081e-4511-8445-dd3e1d7ead0e" (UID: "0de35acf-081e-4511-8445-dd3e1d7ead0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.478738 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0de35acf-081e-4511-8445-dd3e1d7ead0e" (UID: "0de35acf-081e-4511-8445-dd3e1d7ead0e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.480842 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cd08fba-2e76-4705-b4c8-8adc6266f56d" (UID: "4cd08fba-2e76-4705-b4c8-8adc6266f56d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.488952 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcxl9\" (UniqueName: \"kubernetes.io/projected/0de35acf-081e-4511-8445-dd3e1d7ead0e-kube-api-access-xcxl9\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489196 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489255 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7d54\" (UniqueName: \"kubernetes.io/projected/4cd08fba-2e76-4705-b4c8-8adc6266f56d-kube-api-access-v7d54\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489311 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489390 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489446 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489507 4771 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/4cd08fba-2e76-4705-b4c8-8adc6266f56d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489558 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.489611 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.493810 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0de35acf-081e-4511-8445-dd3e1d7ead0e" (UID: "0de35acf-081e-4511-8445-dd3e1d7ead0e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.522970 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data" (OuterVolumeSpecName: "config-data") pod "4cd08fba-2e76-4705-b4c8-8adc6266f56d" (UID: "4cd08fba-2e76-4705-b4c8-8adc6266f56d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.591658 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0de35acf-081e-4511-8445-dd3e1d7ead0e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.591964 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd08fba-2e76-4705-b4c8-8adc6266f56d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:26 crc kubenswrapper[4771]: I0129 09:26:26.669591 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-67ddc4bf8b-n46xf" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.238730 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-nqf55" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.238772 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.269642 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-nqf55"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.280276 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-nqf55"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.295214 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.306532 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.316385 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:27 crc kubenswrapper[4771]: E0129 09:26:27.316840 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="probe" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.316857 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="probe" Jan 29 09:26:27 crc kubenswrapper[4771]: E0129 09:26:27.316877 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerName="dnsmasq-dns" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.316883 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerName="dnsmasq-dns" Jan 29 09:26:27 crc kubenswrapper[4771]: E0129 09:26:27.316905 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerName="init" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.316911 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerName="init" Jan 29 09:26:27 crc kubenswrapper[4771]: E0129 09:26:27.316936 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="cinder-scheduler" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.316942 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="cinder-scheduler" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.317095 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="probe" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.317110 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" containerName="cinder-scheduler" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.317127 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de35acf-081e-4511-8445-dd3e1d7ead0e" containerName="dnsmasq-dns" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.318181 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.320203 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.328336 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.415871 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-scripts\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.416055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh75\" (UniqueName: \"kubernetes.io/projected/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-kube-api-access-vmh75\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.416152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.416184 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-config-data\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.416283 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.416348 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.518288 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh75\" (UniqueName: \"kubernetes.io/projected/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-kube-api-access-vmh75\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.518369 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.518401 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-config-data\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.518493 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.518553 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.518605 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-scripts\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.519424 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.523397 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-scripts\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.523847 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.524620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-config-data\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.530425 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.540217 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh75\" (UniqueName: \"kubernetes.io/projected/0c71ced2-21b6-42f9-bcf8-1d844b6402ab-kube-api-access-vmh75\") pod \"cinder-scheduler-0\" (UID: \"0c71ced2-21b6-42f9-bcf8-1d844b6402ab\") " pod="openstack/cinder-scheduler-0" Jan 
29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.635712 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.657525 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.659029 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.661659 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.662234 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pxmr2" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.663589 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.668351 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.733329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.733863 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.733969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzdx\" (UniqueName: \"kubernetes.io/projected/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-kube-api-access-jnzdx\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.734125 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.836309 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnzdx\" (UniqueName: \"kubernetes.io/projected/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-kube-api-access-jnzdx\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.836412 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.836547 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.836833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.838203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.850553 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.856407 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.866347 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnzdx\" (UniqueName: \"kubernetes.io/projected/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-kube-api-access-jnzdx\") pod \"openstackclient\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.956375 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.957905 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:26:27 crc kubenswrapper[4771]: I0129 09:26:27.968557 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.053230 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.054573 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.080907 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.141469 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:26:28 crc kubenswrapper[4771]: E0129 09:26:28.147574 4771 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 09:26:28 crc kubenswrapper[4771]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_8de6bb3f-ce7d-4b05-aba0-64e8df3860a6_0(c594ae5cc1957098b0f299dd88570faae63d6ceb4d403e0201cc0140716a14e6): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c594ae5cc1957098b0f299dd88570faae63d6ceb4d403e0201cc0140716a14e6" Netns:"/var/run/netns/7e562e67-6e6c-41a9-b0d3-d6c996d149ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c594ae5cc1957098b0f299dd88570faae63d6ceb4d403e0201cc0140716a14e6;K8S_POD_UID=8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6]: expected pod UID "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" but got "0605e923-8ce6-4789-89f7-214d47422865" from Kube API Jan 29 09:26:28 crc kubenswrapper[4771]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 09:26:28 crc kubenswrapper[4771]: > Jan 29 09:26:28 crc kubenswrapper[4771]: E0129 09:26:28.147686 4771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 09:26:28 crc kubenswrapper[4771]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_8de6bb3f-ce7d-4b05-aba0-64e8df3860a6_0(c594ae5cc1957098b0f299dd88570faae63d6ceb4d403e0201cc0140716a14e6): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c594ae5cc1957098b0f299dd88570faae63d6ceb4d403e0201cc0140716a14e6" Netns:"/var/run/netns/7e562e67-6e6c-41a9-b0d3-d6c996d149ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c594ae5cc1957098b0f299dd88570faae63d6ceb4d403e0201cc0140716a14e6;K8S_POD_UID=8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6]: expected pod UID "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" but got "0605e923-8ce6-4789-89f7-214d47422865" from Kube API Jan 29 09:26:28 crc kubenswrapper[4771]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 09:26:28 crc kubenswrapper[4771]: > pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.147962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0605e923-8ce6-4789-89f7-214d47422865-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.148029 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0605e923-8ce6-4789-89f7-214d47422865-openstack-config\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.148077 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0605e923-8ce6-4789-89f7-214d47422865-openstack-config-secret\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.148114 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st7wq\" (UniqueName: \"kubernetes.io/projected/0605e923-8ce6-4789-89f7-214d47422865-kube-api-access-st7wq\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: W0129 09:26:28.168121 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c71ced2_21b6_42f9_bcf8_1d844b6402ab.slice/crio-926aec4c38cdde6b215c8514069222f971d75f8b0ec0646415e0b4c9464aa034 WatchSource:0}: Error finding container 926aec4c38cdde6b215c8514069222f971d75f8b0ec0646415e0b4c9464aa034: Status 404 returned error can't find the container with id 926aec4c38cdde6b215c8514069222f971d75f8b0ec0646415e0b4c9464aa034 Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.254538 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0605e923-8ce6-4789-89f7-214d47422865-openstack-config-secret\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.254649 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st7wq\" (UniqueName: \"kubernetes.io/projected/0605e923-8ce6-4789-89f7-214d47422865-kube-api-access-st7wq\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.254819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0605e923-8ce6-4789-89f7-214d47422865-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.254917 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0605e923-8ce6-4789-89f7-214d47422865-openstack-config\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.256384 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0605e923-8ce6-4789-89f7-214d47422865-openstack-config\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.260913 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0605e923-8ce6-4789-89f7-214d47422865-openstack-config-secret\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.267194 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0605e923-8ce6-4789-89f7-214d47422865-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.282121 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.282780 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c71ced2-21b6-42f9-bcf8-1d844b6402ab","Type":"ContainerStarted","Data":"926aec4c38cdde6b215c8514069222f971d75f8b0ec0646415e0b4c9464aa034"} Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.283392 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st7wq\" (UniqueName: \"kubernetes.io/projected/0605e923-8ce6-4789-89f7-214d47422865-kube-api-access-st7wq\") pod \"openstackclient\" (UID: \"0605e923-8ce6-4789-89f7-214d47422865\") " pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.294488 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" podUID="0605e923-8ce6-4789-89f7-214d47422865" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.303835 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.427935 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.459311 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config-secret\") pod \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.459434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-combined-ca-bundle\") pod \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.459546 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnzdx\" (UniqueName: \"kubernetes.io/projected/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-kube-api-access-jnzdx\") pod \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.459650 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config\") pod \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\" (UID: \"8de6bb3f-ce7d-4b05-aba0-64e8df3860a6\") " Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.464644 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" (UID: "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.466410 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" (UID: "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.468039 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" (UID: "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.468929 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-kube-api-access-jnzdx" (OuterVolumeSpecName: "kube-api-access-jnzdx") pod "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" (UID: "8de6bb3f-ce7d-4b05-aba0-64e8df3860a6"). InnerVolumeSpecName "kube-api-access-jnzdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.562467 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.562518 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.562533 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnzdx\" (UniqueName: \"kubernetes.io/projected/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-kube-api-access-jnzdx\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.562547 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.849639 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de35acf-081e-4511-8445-dd3e1d7ead0e" path="/var/lib/kubelet/pods/0de35acf-081e-4511-8445-dd3e1d7ead0e/volumes" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.850242 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd08fba-2e76-4705-b4c8-8adc6266f56d" path="/var/lib/kubelet/pods/4cd08fba-2e76-4705-b4c8-8adc6266f56d/volumes" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.850972 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" path="/var/lib/kubelet/pods/8de6bb3f-ce7d-4b05-aba0-64e8df3860a6/volumes" Jan 29 09:26:28 crc kubenswrapper[4771]: I0129 09:26:28.908348 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 09:26:29 crc kubenswrapper[4771]: I0129 09:26:29.300308 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c71ced2-21b6-42f9-bcf8-1d844b6402ab","Type":"ContainerStarted","Data":"607d02a8780c66c7b9dfae3626e4bb70ea8900bfc96c5d49b3c95c8342402447"} Jan 29 09:26:29 crc kubenswrapper[4771]: I0129 09:26:29.316804 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 09:26:29 crc kubenswrapper[4771]: I0129 09:26:29.316851 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0605e923-8ce6-4789-89f7-214d47422865","Type":"ContainerStarted","Data":"307a124e7c81f149d9f6c9a2754da89836a1b014707ea5b4824394f36c092a48"} Jan 29 09:26:29 crc kubenswrapper[4771]: I0129 09:26:29.328155 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8de6bb3f-ce7d-4b05-aba0-64e8df3860a6" podUID="0605e923-8ce6-4789-89f7-214d47422865" Jan 29 09:26:30 crc kubenswrapper[4771]: I0129 09:26:30.342596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c71ced2-21b6-42f9-bcf8-1d844b6402ab","Type":"ContainerStarted","Data":"4ee197e7ff35b8ae284c3c20161d566fc28b4da2252fff2f3c9f34ebffd2f8ed"} Jan 29 09:26:30 crc kubenswrapper[4771]: I0129 09:26:30.373591 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.373572186 podStartE2EDuration="3.373572186s" podCreationTimestamp="2026-01-29 09:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:30.365549827 +0000 UTC m=+1210.488390054" watchObservedRunningTime="2026-01-29 09:26:30.373572186 +0000 UTC m=+1210.496412413" Jan 29 09:26:30 crc kubenswrapper[4771]: I0129 09:26:30.602658 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.636637 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.689080 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-55bc4c6647-vmgxk"] Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.695084 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.700394 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.700722 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.700843 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.740585 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55bc4c6647-vmgxk"] Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.851723 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70ca5b45-1804-4830-8ede-b28279d8d4ce-etc-swift\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-public-tls-certs\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889355 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ca5b45-1804-4830-8ede-b28279d8d4ce-run-httpd\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889390 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-combined-ca-bundle\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889418 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55dq\" (UniqueName: \"kubernetes.io/projected/70ca5b45-1804-4830-8ede-b28279d8d4ce-kube-api-access-r55dq\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889492 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-internal-tls-certs\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889517 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/70ca5b45-1804-4830-8ede-b28279d8d4ce-log-httpd\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.889555 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-config-data\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.993655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70ca5b45-1804-4830-8ede-b28279d8d4ce-etc-swift\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.996047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-public-tls-certs\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.996235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ca5b45-1804-4830-8ede-b28279d8d4ce-run-httpd\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.996279 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-combined-ca-bundle\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.996360 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55dq\" (UniqueName: \"kubernetes.io/projected/70ca5b45-1804-4830-8ede-b28279d8d4ce-kube-api-access-r55dq\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.996464 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-internal-tls-certs\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.996503 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ca5b45-1804-4830-8ede-b28279d8d4ce-log-httpd\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.996584 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-config-data\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:32 crc kubenswrapper[4771]: I0129 09:26:32.997436 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ca5b45-1804-4830-8ede-b28279d8d4ce-run-httpd\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.002872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-internal-tls-certs\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.003590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-public-tls-certs\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.003992 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ca5b45-1804-4830-8ede-b28279d8d4ce-log-httpd\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.004688 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70ca5b45-1804-4830-8ede-b28279d8d4ce-etc-swift\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.009452 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-combined-ca-bundle\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.016531 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ca5b45-1804-4830-8ede-b28279d8d4ce-config-data\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.018560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55dq\" (UniqueName: \"kubernetes.io/projected/70ca5b45-1804-4830-8ede-b28279d8d4ce-kube-api-access-r55dq\") pod \"swift-proxy-55bc4c6647-vmgxk\" (UID: \"70ca5b45-1804-4830-8ede-b28279d8d4ce\") " pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.022274 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:33 crc kubenswrapper[4771]: I0129 09:26:33.682808 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55bc4c6647-vmgxk"] Jan 29 09:26:33 crc kubenswrapper[4771]: W0129 09:26:33.686104 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70ca5b45_1804_4830_8ede_b28279d8d4ce.slice/crio-2b50e31873e41b0d56eb08cc64ce7995db7556f535f0b10cf7417bdd3ab63cda WatchSource:0}: Error finding container 2b50e31873e41b0d56eb08cc64ce7995db7556f535f0b10cf7417bdd3ab63cda: Status 404 returned error can't find the container with id 2b50e31873e41b0d56eb08cc64ce7995db7556f535f0b10cf7417bdd3ab63cda Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.305590 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.306420 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-central-agent" containerID="cri-o://92ff20facf34cd36aeb944e32f6b48933e256d4378fc1559dc21b4cc498b966a" gracePeriod=30 Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.306588 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="proxy-httpd" containerID="cri-o://069432868da7d545b505c4e37161c7b68a2787282c3db982c333f58b95f3f12c" gracePeriod=30 Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.306646 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="sg-core" containerID="cri-o://6c43a914f08e178e08a1b93baa148b01925bcac1d37f5afb326eb20d290f501a" gracePeriod=30 Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.306687 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-notification-agent" containerID="cri-o://2be2e345d47efe833e5fe59635838ecf5c65b8dcb0e513be33f3276f26ad9e7f" gracePeriod=30 Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.343840 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.406420 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55bc4c6647-vmgxk" event={"ID":"70ca5b45-1804-4830-8ede-b28279d8d4ce","Type":"ContainerStarted","Data":"b497e7def836edd35231ffe9535fe5a490e863eb51681e4b380ef0ce8652e2aa"} Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.406859 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55bc4c6647-vmgxk" event={"ID":"70ca5b45-1804-4830-8ede-b28279d8d4ce","Type":"ContainerStarted","Data":"682e43f945d71d01d7044b19edcee257ba7d9d5a2fbcd9ad5bb1e653d2fb086b"} Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.406874 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55bc4c6647-vmgxk" 
event={"ID":"70ca5b45-1804-4830-8ede-b28279d8d4ce","Type":"ContainerStarted","Data":"2b50e31873e41b0d56eb08cc64ce7995db7556f535f0b10cf7417bdd3ab63cda"} Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.407061 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.414063 4771 generic.go:334] "Generic (PLEG): container finished" podID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerID="354b820fb836e7e1773668337c4da3f7edbaea603e4e508869f3abe56328ac5b" exitCode=137 Jan 29 09:26:34 crc kubenswrapper[4771]: I0129 09:26:34.414122 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df94697f-2fq9x" event={"ID":"4f3a9db6-57bc-4625-a2de-29c8ba725e10","Type":"ContainerDied","Data":"354b820fb836e7e1773668337c4da3f7edbaea603e4e508869f3abe56328ac5b"} Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.237855 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-55bc4c6647-vmgxk" podStartSLOduration=3.23782897 podStartE2EDuration="3.23782897s" podCreationTimestamp="2026-01-29 09:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:34.434780577 +0000 UTC m=+1214.557620814" watchObservedRunningTime="2026-01-29 09:26:35.23782897 +0000 UTC m=+1215.360669187" Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.241567 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.241897 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-log" containerID="cri-o://e46deb98db0755606370454eeb2e8aa8e1947e692c761c9eddfe2b0bae494804" gracePeriod=30 Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.242081 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-httpd" containerID="cri-o://90f040df567e538cab9b20742f823d8c87533b8d82b17dce94f172e073431a1e" gracePeriod=30 Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.430803 4771 generic.go:334] "Generic (PLEG): container finished" podID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerID="e46deb98db0755606370454eeb2e8aa8e1947e692c761c9eddfe2b0bae494804" exitCode=143 Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.430894 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af79ed5e-0bb8-4195-b0bb-658ef8106824","Type":"ContainerDied","Data":"e46deb98db0755606370454eeb2e8aa8e1947e692c761c9eddfe2b0bae494804"} Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.438610 4771 generic.go:334] "Generic (PLEG): container finished" podID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerID="069432868da7d545b505c4e37161c7b68a2787282c3db982c333f58b95f3f12c" exitCode=0 Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.438651 4771 generic.go:334] "Generic (PLEG): container finished" podID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerID="6c43a914f08e178e08a1b93baa148b01925bcac1d37f5afb326eb20d290f501a" exitCode=2 Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.438660 4771 generic.go:334] "Generic (PLEG): container 
finished" podID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerID="92ff20facf34cd36aeb944e32f6b48933e256d4378fc1559dc21b4cc498b966a" exitCode=0 Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.438723 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerDied","Data":"069432868da7d545b505c4e37161c7b68a2787282c3db982c333f58b95f3f12c"} Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.438785 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerDied","Data":"6c43a914f08e178e08a1b93baa148b01925bcac1d37f5afb326eb20d290f501a"} Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.438799 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerDied","Data":"92ff20facf34cd36aeb944e32f6b48933e256d4378fc1559dc21b4cc498b966a"} Jan 29 09:26:35 crc kubenswrapper[4771]: I0129 09:26:35.439001 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.450959 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-frmrw"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.453728 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.466465 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-frmrw"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.547268 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkjsm\" (UniqueName: \"kubernetes.io/projected/a56c69ee-17ef-4f36-b312-8a7ba10df44a-kube-api-access-kkjsm\") pod \"nova-api-db-create-frmrw\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") " pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.547633 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a56c69ee-17ef-4f36-b312-8a7ba10df44a-operator-scripts\") pod \"nova-api-db-create-frmrw\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") " pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.607161 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4tqtd"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.609754 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.631238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4tqtd"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.651279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwdj\" (UniqueName: \"kubernetes.io/projected/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-kube-api-access-jwwdj\") pod \"nova-cell0-db-create-4tqtd\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") " pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.651379 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkjsm\" (UniqueName: \"kubernetes.io/projected/a56c69ee-17ef-4f36-b312-8a7ba10df44a-kube-api-access-kkjsm\") pod \"nova-api-db-create-frmrw\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") " pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.651496 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a56c69ee-17ef-4f36-b312-8a7ba10df44a-operator-scripts\") pod \"nova-api-db-create-frmrw\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") " pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.651518 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-operator-scripts\") pod \"nova-cell0-db-create-4tqtd\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") " pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.652850 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a56c69ee-17ef-4f36-b312-8a7ba10df44a-operator-scripts\") pod \"nova-api-db-create-frmrw\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") " pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.668063 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-aa3d-account-create-update-s9v84"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.669835 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.675238 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.679011 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-aa3d-account-create-update-s9v84"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.695411 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkjsm\" (UniqueName: \"kubernetes.io/projected/a56c69ee-17ef-4f36-b312-8a7ba10df44a-kube-api-access-kkjsm\") pod \"nova-api-db-create-frmrw\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") " pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.753667 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-operator-scripts\") pod \"nova-cell0-db-create-4tqtd\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") " pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.754411 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-operator-scripts\") pod \"nova-cell0-db-create-4tqtd\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") " pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.754564 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-operator-scripts\") pod \"nova-api-aa3d-account-create-update-s9v84\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") " pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.754714 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwrdr\" (UniqueName: \"kubernetes.io/projected/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-kube-api-access-kwrdr\") pod \"nova-api-aa3d-account-create-update-s9v84\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") " pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.754751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwwdj\" (UniqueName: \"kubernetes.io/projected/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-kube-api-access-jwwdj\") pod \"nova-cell0-db-create-4tqtd\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") " pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.768662 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jq6c5"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.771346 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.773717 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jq6c5"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.774340 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwwdj\" (UniqueName: \"kubernetes.io/projected/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-kube-api-access-jwwdj\") pod \"nova-cell0-db-create-4tqtd\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") " pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.830333 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.860768 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-operator-scripts\") pod \"nova-api-aa3d-account-create-update-s9v84\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") " pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.860843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7037e6e8-a6d3-417e-9a83-091fd1492909-operator-scripts\") pod \"nova-cell1-db-create-jq6c5\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") " pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.860894 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwrdr\" (UniqueName: \"kubernetes.io/projected/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-kube-api-access-kwrdr\") pod \"nova-api-aa3d-account-create-update-s9v84\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") " pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.861062 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb4qv\" (UniqueName: \"kubernetes.io/projected/7037e6e8-a6d3-417e-9a83-091fd1492909-kube-api-access-tb4qv\") pod \"nova-cell1-db-create-jq6c5\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") " pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.862224 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-operator-scripts\") pod \"nova-api-aa3d-account-create-update-s9v84\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") " pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.866911 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2549-account-create-update-7mcrn"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.870563 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2549-account-create-update-7mcrn" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.874773 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.886302 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwrdr\" (UniqueName: \"kubernetes.io/projected/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-kube-api-access-kwrdr\") pod \"nova-api-aa3d-account-create-update-s9v84\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") " pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.908671 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2549-account-create-update-7mcrn"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.936264 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4tqtd" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.978558 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wpj8\" (UniqueName: \"kubernetes.io/projected/dedabb0e-4487-468e-91d2-af4e8767a0d9-kube-api-access-8wpj8\") pod \"nova-cell0-2549-account-create-update-7mcrn\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") " pod="openstack/nova-cell0-2549-account-create-update-7mcrn" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.979089 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4qv\" (UniqueName: \"kubernetes.io/projected/7037e6e8-a6d3-417e-9a83-091fd1492909-kube-api-access-tb4qv\") pod \"nova-cell1-db-create-jq6c5\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") " pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.979246 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7037e6e8-a6d3-417e-9a83-091fd1492909-operator-scripts\") pod \"nova-cell1-db-create-jq6c5\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") " pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.980437 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7037e6e8-a6d3-417e-9a83-091fd1492909-operator-scripts\") pod \"nova-cell1-db-create-jq6c5\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") " pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.979317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedabb0e-4487-468e-91d2-af4e8767a0d9-operator-scripts\") pod \"nova-cell0-2549-account-create-update-7mcrn\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") " pod="openstack/nova-cell0-2549-account-create-update-7mcrn" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.990703 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-edb4-account-create-update-n2gmm"] Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.993012 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" Jan 29 09:26:37 crc kubenswrapper[4771]: I0129 09:26:37.999323 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4qv\" (UniqueName: \"kubernetes.io/projected/7037e6e8-a6d3-417e-9a83-091fd1492909-kube-api-access-tb4qv\") pod \"nova-cell1-db-create-jq6c5\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") " pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.000510 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.033129 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-edb4-account-create-update-n2gmm"] Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.054977 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.055976 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-aa3d-account-create-update-s9v84" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.103339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedabb0e-4487-468e-91d2-af4e8767a0d9-operator-scripts\") pod \"nova-cell0-2549-account-create-update-7mcrn\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") " pod="openstack/nova-cell0-2549-account-create-update-7mcrn" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.103959 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99s6\" (UniqueName: \"kubernetes.io/projected/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-kube-api-access-d99s6\") pod \"nova-cell1-edb4-account-create-update-n2gmm\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") " pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.104062 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wpj8\" (UniqueName: \"kubernetes.io/projected/dedabb0e-4487-468e-91d2-af4e8767a0d9-kube-api-access-8wpj8\") pod \"nova-cell0-2549-account-create-update-7mcrn\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") " pod="openstack/nova-cell0-2549-account-create-update-7mcrn" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.104235 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-operator-scripts\") pod \"nova-cell1-edb4-account-create-update-n2gmm\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") " pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.104794 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedabb0e-4487-468e-91d2-af4e8767a0d9-operator-scripts\") pod \"nova-cell0-2549-account-create-update-7mcrn\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") " pod="openstack/nova-cell0-2549-account-create-update-7mcrn" Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.134436 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 09:26:38 
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.145543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wpj8\" (UniqueName: \"kubernetes.io/projected/dedabb0e-4487-468e-91d2-af4e8767a0d9-kube-api-access-8wpj8\") pod \"nova-cell0-2549-account-create-update-7mcrn\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") " pod="openstack/nova-cell0-2549-account-create-update-7mcrn"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.151491 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jq6c5"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.206240 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99s6\" (UniqueName: \"kubernetes.io/projected/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-kube-api-access-d99s6\") pod \"nova-cell1-edb4-account-create-update-n2gmm\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") " pod="openstack/nova-cell1-edb4-account-create-update-n2gmm"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.208028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-operator-scripts\") pod \"nova-cell1-edb4-account-create-update-n2gmm\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") " pod="openstack/nova-cell1-edb4-account-create-update-n2gmm"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.209727 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-operator-scripts\") pod \"nova-cell1-edb4-account-create-update-n2gmm\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") " pod="openstack/nova-cell1-edb4-account-create-update-n2gmm"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.225334 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99s6\" (UniqueName: \"kubernetes.io/projected/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-kube-api-access-d99s6\") pod \"nova-cell1-edb4-account-create-update-n2gmm\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") " pod="openstack/nova-cell1-edb4-account-create-update-n2gmm"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.267824 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2549-account-create-update-7mcrn"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.371348 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm"
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.539457 4771 generic.go:334] "Generic (PLEG): container finished" podID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerID="90f040df567e538cab9b20742f823d8c87533b8d82b17dce94f172e073431a1e" exitCode=0
Jan 29 09:26:38 crc kubenswrapper[4771]: I0129 09:26:38.539569 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af79ed5e-0bb8-4195-b0bb-658ef8106824","Type":"ContainerDied","Data":"90f040df567e538cab9b20742f823d8c87533b8d82b17dce94f172e073431a1e"}
Jan 29 09:26:39 crc kubenswrapper[4771]: I0129 09:26:39.558507 4771 generic.go:334] "Generic (PLEG): container finished" podID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerID="2be2e345d47efe833e5fe59635838ecf5c65b8dcb0e513be33f3276f26ad9e7f" exitCode=0
Jan 29 09:26:39 crc kubenswrapper[4771]: I0129 09:26:39.558604 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerDied","Data":"2be2e345d47efe833e5fe59635838ecf5c65b8dcb0e513be33f3276f26ad9e7f"}
Jan 29 09:26:39 crc kubenswrapper[4771]: I0129 09:26:39.979621 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 09:26:39 crc kubenswrapper[4771]: I0129 09:26:39.980022 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-httpd" containerID="cri-o://9c353bc4595acbc17a7cc233454a1abcd95f0ccb2d10ac94976139ece910ef0c" gracePeriod=30
Jan 29 09:26:39 crc kubenswrapper[4771]: I0129 09:26:39.980213 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-log" containerID="cri-o://8b8985f775d4eed7bfbf7e3f49a61b541ef3819c64eed140a67e87b56711eaed" gracePeriod=30
Jan 29 09:26:40 crc kubenswrapper[4771]: I0129 09:26:40.571156 4771 generic.go:334] "Generic (PLEG): container finished" podID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerID="8b8985f775d4eed7bfbf7e3f49a61b541ef3819c64eed140a67e87b56711eaed" exitCode=143
Jan 29 09:26:40 crc kubenswrapper[4771]: I0129 09:26:40.571259 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4991753-f456-4f5d-8a34-6f440f82ad8f","Type":"ContainerDied","Data":"8b8985f775d4eed7bfbf7e3f49a61b541ef3819c64eed140a67e87b56711eaed"}
Jan 29 09:26:40 crc kubenswrapper[4771]: I0129 09:26:40.602940 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Jan 29 09:26:40 crc kubenswrapper[4771]: I0129 09:26:40.603129 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d5dc7fbb8-8h9gn"
Need to start a new one" pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.394211 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data-custom\") pod \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.394677 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data\") pod \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.394917 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f3a9db6-57bc-4625-a2de-29c8ba725e10-logs\") pod \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.394976 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d8lq\" (UniqueName: \"kubernetes.io/projected/4f3a9db6-57bc-4625-a2de-29c8ba725e10-kube-api-access-7d8lq\") pod \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.395185 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-combined-ca-bundle\") pod \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\" (UID: \"4f3a9db6-57bc-4625-a2de-29c8ba725e10\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.400185 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f3a9db6-57bc-4625-a2de-29c8ba725e10-logs" (OuterVolumeSpecName: "logs") pod "4f3a9db6-57bc-4625-a2de-29c8ba725e10" (UID: "4f3a9db6-57bc-4625-a2de-29c8ba725e10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.400863 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f3a9db6-57bc-4625-a2de-29c8ba725e10-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.413813 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3a9db6-57bc-4625-a2de-29c8ba725e10-kube-api-access-7d8lq" (OuterVolumeSpecName: "kube-api-access-7d8lq") pod "4f3a9db6-57bc-4625-a2de-29c8ba725e10" (UID: "4f3a9db6-57bc-4625-a2de-29c8ba725e10"). InnerVolumeSpecName "kube-api-access-7d8lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.413928 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4f3a9db6-57bc-4625-a2de-29c8ba725e10" (UID: "4f3a9db6-57bc-4625-a2de-29c8ba725e10"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.454256 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f3a9db6-57bc-4625-a2de-29c8ba725e10" (UID: "4f3a9db6-57bc-4625-a2de-29c8ba725e10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.504950 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.504997 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.505008 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d8lq\" (UniqueName: \"kubernetes.io/projected/4f3a9db6-57bc-4625-a2de-29c8ba725e10-kube-api-access-7d8lq\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.535241 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data" (OuterVolumeSpecName: "config-data") pod "4f3a9db6-57bc-4625-a2de-29c8ba725e10" (UID: "4f3a9db6-57bc-4625-a2de-29c8ba725e10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.575965 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.598323 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8df94697f-2fq9x" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.598336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df94697f-2fq9x" event={"ID":"4f3a9db6-57bc-4625-a2de-29c8ba725e10","Type":"ContainerDied","Data":"7ff5b076b0e5dae31f8b9e619ba6ef21414334d4336bb9282ae0cce3c55f90bf"} Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.598430 4771 scope.go:117] "RemoveContainer" containerID="354b820fb836e7e1773668337c4da3f7edbaea603e4e508869f3abe56328ac5b" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.611281 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f3a9db6-57bc-4625-a2de-29c8ba725e10-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.629230 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa7186d1-6aeb-4270-9762-ffb01552509e","Type":"ContainerDied","Data":"576a541490cd60fe3ca3a3b482d066134c376e4b459666e98e3717f7baff77b9"} Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.629468 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.650896 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8df94697f-2fq9x"] Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.664821 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-8df94697f-2fq9x"] Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.689067 4771 scope.go:117] "RemoveContainer" containerID="3e56356839a5c9c5aed52851f8d3f997e127c9e013d5cd8adce03ec653ab7474" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.713465 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjbtl\" (UniqueName: \"kubernetes.io/projected/fa7186d1-6aeb-4270-9762-ffb01552509e-kube-api-access-cjbtl\") pod \"fa7186d1-6aeb-4270-9762-ffb01552509e\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.720756 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7186d1-6aeb-4270-9762-ffb01552509e-kube-api-access-cjbtl" (OuterVolumeSpecName: "kube-api-access-cjbtl") pod "fa7186d1-6aeb-4270-9762-ffb01552509e" (UID: "fa7186d1-6aeb-4270-9762-ffb01552509e"). InnerVolumeSpecName "kube-api-access-cjbtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.720961 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-run-httpd\") pod \"fa7186d1-6aeb-4270-9762-ffb01552509e\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.721076 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-sg-core-conf-yaml\") pod \"fa7186d1-6aeb-4270-9762-ffb01552509e\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.721119 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-combined-ca-bundle\") pod \"fa7186d1-6aeb-4270-9762-ffb01552509e\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.721144 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-config-data\") pod \"fa7186d1-6aeb-4270-9762-ffb01552509e\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.721198 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-scripts\") pod \"fa7186d1-6aeb-4270-9762-ffb01552509e\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.721385 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-log-httpd\") pod \"fa7186d1-6aeb-4270-9762-ffb01552509e\" (UID: \"fa7186d1-6aeb-4270-9762-ffb01552509e\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.721429 4771 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fa7186d1-6aeb-4270-9762-ffb01552509e" (UID: "fa7186d1-6aeb-4270-9762-ffb01552509e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.722225 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjbtl\" (UniqueName: \"kubernetes.io/projected/fa7186d1-6aeb-4270-9762-ffb01552509e-kube-api-access-cjbtl\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.722251 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.723725 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fa7186d1-6aeb-4270-9762-ffb01552509e" (UID: "fa7186d1-6aeb-4270-9762-ffb01552509e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.728178 4771 scope.go:117] "RemoveContainer" containerID="069432868da7d545b505c4e37161c7b68a2787282c3db982c333f58b95f3f12c" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.748616 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fa7186d1-6aeb-4270-9762-ffb01552509e" (UID: "fa7186d1-6aeb-4270-9762-ffb01552509e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.753128 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-scripts" (OuterVolumeSpecName: "scripts") pod "fa7186d1-6aeb-4270-9762-ffb01552509e" (UID: "fa7186d1-6aeb-4270-9762-ffb01552509e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.776202 4771 scope.go:117] "RemoveContainer" containerID="6c43a914f08e178e08a1b93baa148b01925bcac1d37f5afb326eb20d290f501a" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.828461 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa7186d1-6aeb-4270-9762-ffb01552509e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.828492 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.828507 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.854410 4771 scope.go:117] "RemoveContainer" containerID="2be2e345d47efe833e5fe59635838ecf5c65b8dcb0e513be33f3276f26ad9e7f" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.855014 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.865362 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa7186d1-6aeb-4270-9762-ffb01552509e" (UID: "fa7186d1-6aeb-4270-9762-ffb01552509e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.899113 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-config-data" (OuterVolumeSpecName: "config-data") pod "fa7186d1-6aeb-4270-9762-ffb01552509e" (UID: "fa7186d1-6aeb-4270-9762-ffb01552509e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.929864 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.930120 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-httpd-run\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.930170 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-combined-ca-bundle\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.930213 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-logs\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.930309 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-public-tls-certs\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.930397 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khxlk\" (UniqueName: \"kubernetes.io/projected/af79ed5e-0bb8-4195-b0bb-658ef8106824-kube-api-access-khxlk\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.930445 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-config-data\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.930482 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-scripts\") pod \"af79ed5e-0bb8-4195-b0bb-658ef8106824\" (UID: \"af79ed5e-0bb8-4195-b0bb-658ef8106824\") " Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.931391 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.931425 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa7186d1-6aeb-4270-9762-ffb01552509e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.938759 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: 
"glance") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.939613 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78b5bf9d6f-2454r" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.941102 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.943049 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-logs" (OuterVolumeSpecName: "logs") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.952546 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-scripts" (OuterVolumeSpecName: "scripts") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:41 crc kubenswrapper[4771]: I0129 09:26:41.973610 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af79ed5e-0bb8-4195-b0bb-658ef8106824-kube-api-access-khxlk" (OuterVolumeSpecName: "kube-api-access-khxlk") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "kube-api-access-khxlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.033358 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.033890 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.033905 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af79ed5e-0bb8-4195-b0bb-658ef8106824-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.033918 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khxlk\" (UniqueName: \"kubernetes.io/projected/af79ed5e-0bb8-4195-b0bb-658ef8106824-kube-api-access-khxlk\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.033933 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.035119 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.057722 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-aa3d-account-create-update-s9v84"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.072370 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.088895 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b7b97bcf6-xn2wg"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.089159 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b7b97bcf6-xn2wg" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-api" containerID="cri-o://2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde" gracePeriod=30 Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.089315 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b7b97bcf6-xn2wg" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-httpd" containerID="cri-o://6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849" gracePeriod=30 Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.142134 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.142183 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.161997 4771 scope.go:117] "RemoveContainer" containerID="92ff20facf34cd36aeb944e32f6b48933e256d4378fc1559dc21b4cc498b966a" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.171991 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-config-data" (OuterVolumeSpecName: "config-data") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.183408 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "af79ed5e-0bb8-4195-b0bb-658ef8106824" (UID: "af79ed5e-0bb8-4195-b0bb-658ef8106824"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.195786 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.243646 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.243680 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af79ed5e-0bb8-4195-b0bb-658ef8106824-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.249835 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.275750 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276231 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="proxy-httpd" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276246 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="proxy-httpd" Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276256 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="sg-core" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276262 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="sg-core" Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276277 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-notification-agent" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276282 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-notification-agent" Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276291 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker-log" Jan 29 09:26:42 
crc kubenswrapper[4771]: I0129 09:26:42.276298 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker-log" Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276306 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-log" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276313 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-log" Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276346 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-central-agent" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276352 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-central-agent" Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276380 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276386 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker" Jan 29 09:26:42 crc kubenswrapper[4771]: E0129 09:26:42.276396 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-httpd" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276402 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-httpd" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276572 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-notification-agent" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276613 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-log" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276624 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="sg-core" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276640 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="proxy-httpd" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276649 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" containerName="glance-httpd" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276661 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker-log" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276669 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" containerName="ceilometer-central-agent" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.276677 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" containerName="barbican-worker" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.278463 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.287077 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.287298 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.294448 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.504164 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2549-account-create-update-7mcrn"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.505377 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-run-httpd\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.505438 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgr6g\" (UniqueName: \"kubernetes.io/projected/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-kube-api-access-xgr6g\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.505566 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.505725 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.505826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-scripts\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.505943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-config-data\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.506028 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-log-httpd\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.547272 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-frmrw"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.561440 4771 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4tqtd"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.570644 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jq6c5"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.582425 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-edb4-account-create-update-n2gmm"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.608605 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.608872 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.608921 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-scripts\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.608963 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-config-data\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.608994 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-log-httpd\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.609067 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-run-httpd\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.609089 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgr6g\" (UniqueName: \"kubernetes.io/projected/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-kube-api-access-xgr6g\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.609666 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-log-httpd\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.609916 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.616337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-scripts\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.617411 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.617411 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.624174 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-config-data\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.646851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgr6g\" (UniqueName: \"kubernetes.io/projected/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-kube-api-access-xgr6g\") pod \"ceilometer-0\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.651044 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frmrw" event={"ID":"a56c69ee-17ef-4f36-b312-8a7ba10df44a","Type":"ContainerStarted","Data":"c2ec5ae42c344d9b269e09611521ef3464ba79ff75c197a76d5d0f13b33c9aba"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.669756 4771 generic.go:334] "Generic (PLEG): container finished" podID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerID="6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849" exitCode=0 Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.669857 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b97bcf6-xn2wg" event={"ID":"84e8bfd2-5035-43a0-80ee-c4ceed1d422c","Type":"ContainerDied","Data":"6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.671300 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2549-account-create-update-7mcrn" event={"ID":"dedabb0e-4487-468e-91d2-af4e8767a0d9","Type":"ContainerStarted","Data":"5ebc19bd074eaba42aae30775077b59b2c566132d67ff631bc9e952a9a4c9132"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.673496 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.681046 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-aa3d-account-create-update-s9v84" event={"ID":"5c01cd54-a15c-4367-8e1c-ee8cdc10373c","Type":"ContainerStarted","Data":"4ea2aabfb0aafeb151996dd2b11d5ef6da9fed5474cb8400065466d43c6cae1d"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.681093 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-aa3d-account-create-update-s9v84" event={"ID":"5c01cd54-a15c-4367-8e1c-ee8cdc10373c","Type":"ContainerStarted","Data":"7c078e35dd8bc65f1b7c1cffa432dab02d56a6c71a43ea61eb57c71ea66c5696"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.692491 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"af79ed5e-0bb8-4195-b0bb-658ef8106824","Type":"ContainerDied","Data":"8ec447cc190709358c7877948f027bc1d4d99b15eb34ed7d19daf33cab660f02"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.692548 4771 scope.go:117] "RemoveContainer" containerID="90f040df567e538cab9b20742f823d8c87533b8d82b17dce94f172e073431a1e" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.692754 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.709983 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0605e923-8ce6-4789-89f7-214d47422865","Type":"ContainerStarted","Data":"926e797fe530aaaee202eb06138f839cd0451000943779e14372dc13e63cd168"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.717150 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-aa3d-account-create-update-s9v84" podStartSLOduration=5.717125749 podStartE2EDuration="5.717125749s" podCreationTimestamp="2026-01-29 09:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:42.700577937 +0000 UTC m=+1222.823418164" watchObservedRunningTime="2026-01-29 09:26:42.717125749 +0000 UTC m=+1222.839965976" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.724672 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jq6c5" event={"ID":"7037e6e8-a6d3-417e-9a83-091fd1492909","Type":"ContainerStarted","Data":"21a39f174dfb9fc8857798b821b9dcd63fae4c17e7069b2ea751808acca7e73f"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.766817 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" event={"ID":"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9","Type":"ContainerStarted","Data":"218f091b2ed1b88318a5864f25bf3e90700e068f4a52f90f73b4bbab252ccca3"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.769967 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4tqtd" event={"ID":"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4","Type":"ContainerStarted","Data":"68d2342376e1db200e0d3d842ebf95d43e70344987444c524168299d7cd7a4a0"} Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.785374 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.611174179 podStartE2EDuration="14.785349633s" podCreationTimestamp="2026-01-29 09:26:28 +0000 UTC" firstStartedPulling="2026-01-29 
09:26:28.90065346 +0000 UTC m=+1209.023493687" lastFinishedPulling="2026-01-29 09:26:41.074828914 +0000 UTC m=+1221.197669141" observedRunningTime="2026-01-29 09:26:42.729435955 +0000 UTC m=+1222.852276192" watchObservedRunningTime="2026-01-29 09:26:42.785349633 +0000 UTC m=+1222.908189860" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.806448 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.831490 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.933069 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f3a9db6-57bc-4625-a2de-29c8ba725e10" path="/var/lib/kubelet/pods/4f3a9db6-57bc-4625-a2de-29c8ba725e10/volumes" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.934104 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af79ed5e-0bb8-4195-b0bb-658ef8106824" path="/var/lib/kubelet/pods/af79ed5e-0bb8-4195-b0bb-658ef8106824/volumes" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.934843 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa7186d1-6aeb-4270-9762-ffb01552509e" path="/var/lib/kubelet/pods/fa7186d1-6aeb-4270-9762-ffb01552509e/volumes" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.936510 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.938403 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.938526 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.944307 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 09:26:42 crc kubenswrapper[4771]: I0129 09:26:42.945199 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.046601 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55bc4c6647-vmgxk" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.069813 4771 scope.go:117] "RemoveContainer" containerID="e46deb98db0755606370454eeb2e8aa8e1947e692c761c9eddfe2b0bae494804" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138133 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-logs\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138203 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-config-data\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138227 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138244 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138287 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138322 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twht4\" (UniqueName: \"kubernetes.io/projected/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-kube-api-access-twht4\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138365 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: 
\"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.138438 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-scripts\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.172614 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:43554->10.217.0.152:9292: read: connection reset by peer" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.172884 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:43540->10.217.0.152:9292: read: connection reset by peer" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.240588 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-logs\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.240941 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-config-data\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.240973 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.240995 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.241065 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.241082 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twht4\" (UniqueName: \"kubernetes.io/projected/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-kube-api-access-twht4\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " 
pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.241126 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.241204 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-scripts\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.245034 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.245355 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.245751 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-logs\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.254656 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-scripts\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.255245 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.263337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.267025 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-config-data\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.280386 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-twht4\" (UniqueName: \"kubernetes.io/projected/afe8ac11-63a3-4c6f-b46b-a8a79ba8e027-kube-api-access-twht4\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.320117 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027\") " pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.390756 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.408551 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.710524 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.794097 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" event={"ID":"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9","Type":"ContainerStarted","Data":"4e811faffbc5ff9f7e22015d649fddb381a32e1afc795b0fdfdf9d23fcc57ca6"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.806051 4771 generic.go:334] "Generic (PLEG): container finished" podID="7fdc5aac-5c6a-4ebd-95b6-34a876f299b4" containerID="c2cbe0f39a01d44990aab5ebaba7776d60b888e68d640ab5966658ec3444230e" exitCode=0 Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.806227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4tqtd" event={"ID":"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4","Type":"ContainerDied","Data":"c2cbe0f39a01d44990aab5ebaba7776d60b888e68d640ab5966658ec3444230e"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.814669 4771 generic.go:334] "Generic (PLEG): container finished" podID="5c01cd54-a15c-4367-8e1c-ee8cdc10373c" containerID="4ea2aabfb0aafeb151996dd2b11d5ef6da9fed5474cb8400065466d43c6cae1d" exitCode=0 Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.814814 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-aa3d-account-create-update-s9v84" event={"ID":"5c01cd54-a15c-4367-8e1c-ee8cdc10373c","Type":"ContainerDied","Data":"4ea2aabfb0aafeb151996dd2b11d5ef6da9fed5474cb8400065466d43c6cae1d"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.821140 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerStarted","Data":"2e31b3ec7b55673c0f9b44f490bc19bcd40b4ffcbcf7ae91881aa341a06b8087"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.827075 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.827374 4771 generic.go:334] "Generic (PLEG): container finished" podID="dedabb0e-4487-468e-91d2-af4e8767a0d9" containerID="12c0ba4d183513e5f0a76674d005402c3aaa32d532b3a050e3f523edee225af6" exitCode=0 Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.827435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2549-account-create-update-7mcrn" event={"ID":"dedabb0e-4487-468e-91d2-af4e8767a0d9","Type":"ContainerDied","Data":"12c0ba4d183513e5f0a76674d005402c3aaa32d532b3a050e3f523edee225af6"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.828932 4771 generic.go:334] "Generic (PLEG): container finished" podID="a56c69ee-17ef-4f36-b312-8a7ba10df44a" containerID="1938a20f726df6da47c3e0b88eac6ef1060e2a20623496094e6f240d59e7c1fe" exitCode=0 Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.828986 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frmrw" event={"ID":"a56c69ee-17ef-4f36-b312-8a7ba10df44a","Type":"ContainerDied","Data":"1938a20f726df6da47c3e0b88eac6ef1060e2a20623496094e6f240d59e7c1fe"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.852193 4771 generic.go:334] "Generic (PLEG): container finished" podID="7037e6e8-a6d3-417e-9a83-091fd1492909" containerID="32e795fb5ebed137c7535b77ecf3169466bb6497a37947a16b61fd960a6a0981" exitCode=0 Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.852230 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jq6c5" event={"ID":"7037e6e8-a6d3-417e-9a83-091fd1492909","Type":"ContainerDied","Data":"32e795fb5ebed137c7535b77ecf3169466bb6497a37947a16b61fd960a6a0981"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.855898 4771 generic.go:334] "Generic (PLEG): container finished" podID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerID="9c353bc4595acbc17a7cc233454a1abcd95f0ccb2d10ac94976139ece910ef0c" exitCode=0 Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.857233 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.857441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4991753-f456-4f5d-8a34-6f440f82ad8f","Type":"ContainerDied","Data":"9c353bc4595acbc17a7cc233454a1abcd95f0ccb2d10ac94976139ece910ef0c"} Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.857485 4771 scope.go:117] "RemoveContainer" containerID="9c353bc4595acbc17a7cc233454a1abcd95f0ccb2d10ac94976139ece910ef0c" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.903550 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" podStartSLOduration=6.903521446 podStartE2EDuration="6.903521446s" podCreationTimestamp="2026-01-29 09:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:43.815744108 +0000 UTC m=+1223.938584345" watchObservedRunningTime="2026-01-29 09:26:43.903521446 +0000 UTC m=+1224.026361673" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.976749 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-httpd-run\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.978939 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.979161 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.979335 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-internal-tls-certs\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.979458 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-logs\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.979555 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-scripts\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.979603 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6hwb\" (UniqueName: \"kubernetes.io/projected/e4991753-f456-4f5d-8a34-6f440f82ad8f-kube-api-access-q6hwb\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.979745 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-config-data\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.979809 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-combined-ca-bundle\") pod \"e4991753-f456-4f5d-8a34-6f440f82ad8f\" (UID: \"e4991753-f456-4f5d-8a34-6f440f82ad8f\") " Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.987668 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-logs" (OuterVolumeSpecName: "logs") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:43 crc kubenswrapper[4771]: I0129 09:26:43.987961 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.014990 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4991753-f456-4f5d-8a34-6f440f82ad8f-kube-api-access-q6hwb" (OuterVolumeSpecName: "kube-api-access-q6hwb") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "kube-api-access-q6hwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.024645 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.063135 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-scripts" (OuterVolumeSpecName: "scripts") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.091150 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.093231 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4991753-f456-4f5d-8a34-6f440f82ad8f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.093380 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.093446 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6hwb\" (UniqueName: \"kubernetes.io/projected/e4991753-f456-4f5d-8a34-6f440f82ad8f-kube-api-access-q6hwb\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.100815 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.123933 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.124836 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-config-data" (OuterVolumeSpecName: "config-data") pod "e4991753-f456-4f5d-8a34-6f440f82ad8f" (UID: "e4991753-f456-4f5d-8a34-6f440f82ad8f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.191971 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.195601 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.195638 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.195649 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.195658 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4991753-f456-4f5d-8a34-6f440f82ad8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.195973 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:26:44 crc kubenswrapper[4771]: W0129 09:26:44.198851 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafe8ac11_63a3_4c6f_b46b_a8a79ba8e027.slice/crio-0a9b6d61640ded5085df37b8179db403ee9f55c331f49c8f5d6bae95f339ee27 WatchSource:0}: Error finding container 0a9b6d61640ded5085df37b8179db403ee9f55c331f49c8f5d6bae95f339ee27: Status 404 returned error can't find the container with id 0a9b6d61640ded5085df37b8179db403ee9f55c331f49c8f5d6bae95f339ee27 Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.271684 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.272155 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.278025 4771 scope.go:117] "RemoveContainer" containerID="8b8985f775d4eed7bfbf7e3f49a61b541ef3819c64eed140a67e87b56711eaed" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.287476 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.318771 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.342662 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:26:44 crc kubenswrapper[4771]: E0129 09:26:44.343399 4771 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-httpd" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.343421 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-httpd" Jan 29 09:26:44 crc kubenswrapper[4771]: E0129 09:26:44.343482 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-log" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.343489 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-log" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.343743 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-log" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.343775 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" containerName="glance-httpd" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.345264 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.348482 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.348757 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.367926 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.504318 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b13b11-fc7b-4228-b762-27c0ae94ae33-logs\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.504418 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.507015 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmlk9\" (UniqueName: \"kubernetes.io/projected/d8b13b11-fc7b-4228-b762-27c0ae94ae33-kube-api-access-zmlk9\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.507084 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.507161 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.507301 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.507439 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.507519 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8b13b11-fc7b-4228-b762-27c0ae94ae33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.610585 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b13b11-fc7b-4228-b762-27c0ae94ae33-logs\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.610673 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.610740 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmlk9\" (UniqueName: \"kubernetes.io/projected/d8b13b11-fc7b-4228-b762-27c0ae94ae33-kube-api-access-zmlk9\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.610761 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.611504 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b13b11-fc7b-4228-b762-27c0ae94ae33-logs\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.611580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.611660 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.611737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.611774 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8b13b11-fc7b-4228-b762-27c0ae94ae33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.612383 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8b13b11-fc7b-4228-b762-27c0ae94ae33-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.615927 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.620251 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.621378 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.621418 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.622045 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b13b11-fc7b-4228-b762-27c0ae94ae33-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.632984 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmlk9\" (UniqueName: \"kubernetes.io/projected/d8b13b11-fc7b-4228-b762-27c0ae94ae33-kube-api-access-zmlk9\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.654466 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b13b11-fc7b-4228-b762-27c0ae94ae33\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.673577 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.859805 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4991753-f456-4f5d-8a34-6f440f82ad8f" path="/var/lib/kubelet/pods/e4991753-f456-4f5d-8a34-6f440f82ad8f/volumes" Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.869082 4771 generic.go:334] "Generic (PLEG): container finished" podID="21ce4d22-6cc6-4f30-9b5a-3870f8268fe9" containerID="4e811faffbc5ff9f7e22015d649fddb381a32e1afc795b0fdfdf9d23fcc57ca6" exitCode=0 Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.869154 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" event={"ID":"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9","Type":"ContainerDied","Data":"4e811faffbc5ff9f7e22015d649fddb381a32e1afc795b0fdfdf9d23fcc57ca6"} Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.871480 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027","Type":"ContainerStarted","Data":"0a9b6d61640ded5085df37b8179db403ee9f55c331f49c8f5d6bae95f339ee27"} Jan 29 09:26:44 crc kubenswrapper[4771]: I0129 09:26:44.877798 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerStarted","Data":"73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26"} Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.328435 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:26:45 crc kubenswrapper[4771]: W0129 09:26:45.359882 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8b13b11_fc7b_4228_b762_27c0ae94ae33.slice/crio-89a9ea90190d76818a1981f56e9abd2b4b67da038ee5553895630d41c9c3188c WatchSource:0}: Error finding container 89a9ea90190d76818a1981f56e9abd2b4b67da038ee5553895630d41c9c3188c: Status 404 returned error can't find the container with id 89a9ea90190d76818a1981f56e9abd2b4b67da038ee5553895630d41c9c3188c Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.726597 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2549-account-create-update-7mcrn" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.771601 4771 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.789640 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jq6c5"
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.790072 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-aa3d-account-create-update-s9v84"
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.792788 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-frmrw"
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.861421 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwwdj\" (UniqueName: \"kubernetes.io/projected/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-kube-api-access-jwwdj\") pod \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.861660 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedabb0e-4487-468e-91d2-af4e8767a0d9-operator-scripts\") pod \"dedabb0e-4487-468e-91d2-af4e8767a0d9\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.861812 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wpj8\" (UniqueName: \"kubernetes.io/projected/dedabb0e-4487-468e-91d2-af4e8767a0d9-kube-api-access-8wpj8\") pod \"dedabb0e-4487-468e-91d2-af4e8767a0d9\" (UID: \"dedabb0e-4487-468e-91d2-af4e8767a0d9\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.861955 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-operator-scripts\") pod \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\" (UID: \"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.863926 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fdc5aac-5c6a-4ebd-95b6-34a876f299b4" (UID: "7fdc5aac-5c6a-4ebd-95b6-34a876f299b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.865455 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedabb0e-4487-468e-91d2-af4e8767a0d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dedabb0e-4487-468e-91d2-af4e8767a0d9" (UID: "dedabb0e-4487-468e-91d2-af4e8767a0d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.883526 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-kube-api-access-jwwdj" (OuterVolumeSpecName: "kube-api-access-jwwdj") pod "7fdc5aac-5c6a-4ebd-95b6-34a876f299b4" (UID: "7fdc5aac-5c6a-4ebd-95b6-34a876f299b4"). InnerVolumeSpecName "kube-api-access-jwwdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.910914 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedabb0e-4487-468e-91d2-af4e8767a0d9-kube-api-access-8wpj8" (OuterVolumeSpecName: "kube-api-access-8wpj8") pod "dedabb0e-4487-468e-91d2-af4e8767a0d9" (UID: "dedabb0e-4487-468e-91d2-af4e8767a0d9"). InnerVolumeSpecName "kube-api-access-8wpj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.981797 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-operator-scripts\") pod \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.981863 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwrdr\" (UniqueName: \"kubernetes.io/projected/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-kube-api-access-kwrdr\") pod \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\" (UID: \"5c01cd54-a15c-4367-8e1c-ee8cdc10373c\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.982008 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a56c69ee-17ef-4f36-b312-8a7ba10df44a-operator-scripts\") pod \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.982161 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7037e6e8-a6d3-417e-9a83-091fd1492909-operator-scripts\") pod \"7037e6e8-a6d3-417e-9a83-091fd1492909\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.982259 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb4qv\" (UniqueName: \"kubernetes.io/projected/7037e6e8-a6d3-417e-9a83-091fd1492909-kube-api-access-tb4qv\") pod \"7037e6e8-a6d3-417e-9a83-091fd1492909\" (UID: \"7037e6e8-a6d3-417e-9a83-091fd1492909\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.982289 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkjsm\" (UniqueName: \"kubernetes.io/projected/a56c69ee-17ef-4f36-b312-8a7ba10df44a-kube-api-access-kkjsm\") pod \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\" (UID: \"a56c69ee-17ef-4f36-b312-8a7ba10df44a\") "
Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.983282 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a56c69ee-17ef-4f36-b312-8a7ba10df44a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a56c69ee-17ef-4f36-b312-8a7ba10df44a" (UID: "a56c69ee-17ef-4f36-b312-8a7ba10df44a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.983670 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedabb0e-4487-468e-91d2-af4e8767a0d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.986763 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wpj8\" (UniqueName: \"kubernetes.io/projected/dedabb0e-4487-468e-91d2-af4e8767a0d9-kube-api-access-8wpj8\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.986886 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.987016 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwwdj\" (UniqueName: \"kubernetes.io/projected/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4-kube-api-access-jwwdj\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.983821 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7037e6e8-a6d3-417e-9a83-091fd1492909-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7037e6e8-a6d3-417e-9a83-091fd1492909" (UID: "7037e6e8-a6d3-417e-9a83-091fd1492909"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.985460 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerStarted","Data":"a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17"} Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.984254 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c01cd54-a15c-4367-8e1c-ee8cdc10373c" (UID: "5c01cd54-a15c-4367-8e1c-ee8cdc10373c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.989729 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-kube-api-access-kwrdr" (OuterVolumeSpecName: "kube-api-access-kwrdr") pod "5c01cd54-a15c-4367-8e1c-ee8cdc10373c" (UID: "5c01cd54-a15c-4367-8e1c-ee8cdc10373c"). InnerVolumeSpecName "kube-api-access-kwrdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.989928 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56c69ee-17ef-4f36-b312-8a7ba10df44a-kube-api-access-kkjsm" (OuterVolumeSpecName: "kube-api-access-kkjsm") pod "a56c69ee-17ef-4f36-b312-8a7ba10df44a" (UID: "a56c69ee-17ef-4f36-b312-8a7ba10df44a"). InnerVolumeSpecName "kube-api-access-kkjsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:45 crc kubenswrapper[4771]: I0129 09:26:45.998084 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7037e6e8-a6d3-417e-9a83-091fd1492909-kube-api-access-tb4qv" (OuterVolumeSpecName: "kube-api-access-tb4qv") pod "7037e6e8-a6d3-417e-9a83-091fd1492909" (UID: "7037e6e8-a6d3-417e-9a83-091fd1492909"). InnerVolumeSpecName "kube-api-access-tb4qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.004724 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-frmrw" Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.005001 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frmrw" event={"ID":"a56c69ee-17ef-4f36-b312-8a7ba10df44a","Type":"ContainerDied","Data":"c2ec5ae42c344d9b269e09611521ef3464ba79ff75c197a76d5d0f13b33c9aba"} Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.005035 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ec5ae42c344d9b269e09611521ef3464ba79ff75c197a76d5d0f13b33c9aba" Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.039525 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jq6c5" event={"ID":"7037e6e8-a6d3-417e-9a83-091fd1492909","Type":"ContainerDied","Data":"21a39f174dfb9fc8857798b821b9dcd63fae4c17e7069b2ea751808acca7e73f"} Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.039920 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21a39f174dfb9fc8857798b821b9dcd63fae4c17e7069b2ea751808acca7e73f" Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.040097 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jq6c5" Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.060114 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d8b13b11-fc7b-4228-b762-27c0ae94ae33","Type":"ContainerStarted","Data":"89a9ea90190d76818a1981f56e9abd2b4b67da038ee5553895630d41c9c3188c"} Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.071357 4771 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.071339 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4tqtd" event={"ID":"7fdc5aac-5c6a-4ebd-95b6-34a876f299b4","Type":"ContainerDied","Data":"68d2342376e1db200e0d3d842ebf95d43e70344987444c524168299d7cd7a4a0"}
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.071750 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d2342376e1db200e0d3d842ebf95d43e70344987444c524168299d7cd7a4a0"
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.079459 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2549-account-create-update-7mcrn" event={"ID":"dedabb0e-4487-468e-91d2-af4e8767a0d9","Type":"ContainerDied","Data":"5ebc19bd074eaba42aae30775077b59b2c566132d67ff631bc9e952a9a4c9132"}
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.079514 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ebc19bd074eaba42aae30775077b59b2c566132d67ff631bc9e952a9a4c9132"
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.079626 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2549-account-create-update-7mcrn"
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.098013 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.098065 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwrdr\" (UniqueName: \"kubernetes.io/projected/5c01cd54-a15c-4367-8e1c-ee8cdc10373c-kube-api-access-kwrdr\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.098086 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a56c69ee-17ef-4f36-b312-8a7ba10df44a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.098104 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7037e6e8-a6d3-417e-9a83-091fd1492909-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.098120 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb4qv\" (UniqueName: \"kubernetes.io/projected/7037e6e8-a6d3-417e-9a83-091fd1492909-kube-api-access-tb4qv\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.098135 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkjsm\" (UniqueName: \"kubernetes.io/projected/a56c69ee-17ef-4f36-b312-8a7ba10df44a-kube-api-access-kkjsm\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.105201 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-aa3d-account-create-update-s9v84" event={"ID":"5c01cd54-a15c-4367-8e1c-ee8cdc10373c","Type":"ContainerDied","Data":"7c078e35dd8bc65f1b7c1cffa432dab02d56a6c71a43ea61eb57c71ea66c5696"}
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.105257 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c078e35dd8bc65f1b7c1cffa432dab02d56a6c71a43ea61eb57c71ea66c5696"
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.105231 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-aa3d-account-create-update-s9v84"
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.114419 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027","Type":"ContainerStarted","Data":"4fab77781fe0d9a9f267b55bf33c770a1ddd98ae4827568c56fa0cf5dd6323d0"}
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.581351 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm"
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.720874 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99s6\" (UniqueName: \"kubernetes.io/projected/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-kube-api-access-d99s6\") pod \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") "
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.721377 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-operator-scripts\") pod \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\" (UID: \"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9\") "
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.722564 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21ce4d22-6cc6-4f30-9b5a-3870f8268fe9" (UID: "21ce4d22-6cc6-4f30-9b5a-3870f8268fe9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.729014 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-kube-api-access-d99s6" (OuterVolumeSpecName: "kube-api-access-d99s6") pod "21ce4d22-6cc6-4f30-9b5a-3870f8268fe9" (UID: "21ce4d22-6cc6-4f30-9b5a-3870f8268fe9"). InnerVolumeSpecName "kube-api-access-d99s6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.824005 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:46 crc kubenswrapper[4771]: I0129 09:26:46.824049 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99s6\" (UniqueName: \"kubernetes.io/projected/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9-kube-api-access-d99s6\") on node \"crc\" DevicePath \"\""
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.002394 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7b97bcf6-xn2wg"
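Every kubenswrapper line in this journal shares the klog header format: severity letter (I/W/E), MMDD date, wall-clock time, PID, source file:line, then a structured message. A small stdlib-only parser for pulling fields out of these lines, e.g. to filter every ContainerDied event (sketch; field names are my own):

```go
package main

import (
	"fmt"
	"regexp"
)

// klogRe matches the header the kubelet emits via klog:
//   I0129 09:26:46.005001 4771 kubelet.go:2453] <message>
var klogRe = regexp.MustCompile(
	`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)`)

func main() {
	line := `I0129 09:26:46.005001 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frmrw"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```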
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.131234 4771 generic.go:334] "Generic (PLEG): container finished" podID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerID="2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde" exitCode=0
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.131306 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b97bcf6-xn2wg" event={"ID":"84e8bfd2-5035-43a0-80ee-c4ceed1d422c","Type":"ContainerDied","Data":"2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde"}
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.131335 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b97bcf6-xn2wg" event={"ID":"84e8bfd2-5035-43a0-80ee-c4ceed1d422c","Type":"ContainerDied","Data":"4af07087216cf338d96e38fd0dec8d85b4983cc45f40b350feaef7cb9b0699cc"}
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.131352 4771 scope.go:117] "RemoveContainer" containerID="6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849"
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.131444 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7b97bcf6-xn2wg"
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.132847 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-ovndb-tls-certs\") pod \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") "
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.133026 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-httpd-config\") pod \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") "
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.133069 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-config\") pod \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") "
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.133147 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x785r\" (UniqueName: \"kubernetes.io/projected/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-kube-api-access-x785r\") pod \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") "
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.133291 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-combined-ca-bundle\") pod \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\" (UID: \"84e8bfd2-5035-43a0-80ee-c4ceed1d422c\") "
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.139873 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-kube-api-access-x785r" (OuterVolumeSpecName: "kube-api-access-x785r") pod "84e8bfd2-5035-43a0-80ee-c4ceed1d422c" (UID: "84e8bfd2-5035-43a0-80ee-c4ceed1d422c"). InnerVolumeSpecName "kube-api-access-x785r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.155647 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "84e8bfd2-5035-43a0-80ee-c4ceed1d422c" (UID: "84e8bfd2-5035-43a0-80ee-c4ceed1d422c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.158055 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afe8ac11-63a3-4c6f-b46b-a8a79ba8e027","Type":"ContainerStarted","Data":"2c8dd277980b74b058ccab41a6d5d30620622e47edc7689a48456578f1d76a14"}
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.164507 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerStarted","Data":"783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb"}
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.186305 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm" event={"ID":"21ce4d22-6cc6-4f30-9b5a-3870f8268fe9","Type":"ContainerDied","Data":"218f091b2ed1b88318a5864f25bf3e90700e068f4a52f90f73b4bbab252ccca3"}
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.186355 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="218f091b2ed1b88318a5864f25bf3e90700e068f4a52f90f73b4bbab252ccca3"
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.186428 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-edb4-account-create-update-n2gmm"
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.208421 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d8b13b11-fc7b-4228-b762-27c0ae94ae33","Type":"ContainerStarted","Data":"aa4e230a3985025e765bf77c79df90ba6296a2e2284ecec6699f854619ab4d94"}
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.226796 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.226770902 podStartE2EDuration="5.226770902s" podCreationTimestamp="2026-01-29 09:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:47.191578581 +0000 UTC m=+1227.314418818" watchObservedRunningTime="2026-01-29 09:26:47.226770902 +0000 UTC m=+1227.349611129"
Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.232181 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-config" (OuterVolumeSpecName: "config") pod "84e8bfd2-5035-43a0-80ee-c4ceed1d422c" (UID: "84e8bfd2-5035-43a0-80ee-c4ceed1d422c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.236761 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.237759 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.237864 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x785r\" (UniqueName: \"kubernetes.io/projected/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-kube-api-access-x785r\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.274427 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84e8bfd2-5035-43a0-80ee-c4ceed1d422c" (UID: "84e8bfd2-5035-43a0-80ee-c4ceed1d422c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.312893 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "84e8bfd2-5035-43a0-80ee-c4ceed1d422c" (UID: "84e8bfd2-5035-43a0-80ee-c4ceed1d422c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.338775 4771 scope.go:117] "RemoveContainer" containerID="2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.342105 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.342247 4771 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84e8bfd2-5035-43a0-80ee-c4ceed1d422c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.372416 4771 scope.go:117] "RemoveContainer" containerID="6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849" Jan 29 09:26:47 crc kubenswrapper[4771]: E0129 09:26:47.374193 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849\": container with ID starting with 6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849 not found: ID does not exist" containerID="6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.374353 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849"} err="failed to get container status \"6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849\": rpc error: code = NotFound desc = could not find container 
\"6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849\": container with ID starting with 6cc1ba029daca6c9bcc7db4ef699d1d0a2f2c19a09c152a803217754d1f0f849 not found: ID does not exist" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.374478 4771 scope.go:117] "RemoveContainer" containerID="2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde" Jan 29 09:26:47 crc kubenswrapper[4771]: E0129 09:26:47.375208 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde\": container with ID starting with 2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde not found: ID does not exist" containerID="2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.375310 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde"} err="failed to get container status \"2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde\": rpc error: code = NotFound desc = could not find container \"2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde\": container with ID starting with 2042980027dfc079bb45ed4d87e355f4725f5c4c7f4b8a078be1cc1b8ac52fde not found: ID does not exist" Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.471506 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b7b97bcf6-xn2wg"] Jan 29 09:26:47 crc kubenswrapper[4771]: I0129 09:26:47.484989 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b7b97bcf6-xn2wg"] Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.281830 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d8b13b11-fc7b-4228-b762-27c0ae94ae33","Type":"ContainerStarted","Data":"d96ef8235cc447c499539b21afa7b89982d7407a59e1a68d267bba282daad7f4"} Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.361732 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vlnn"] Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.362780 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedabb0e-4487-468e-91d2-af4e8767a0d9" containerName="mariadb-account-create-update" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.362805 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedabb0e-4487-468e-91d2-af4e8767a0d9" containerName="mariadb-account-create-update" Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.362817 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce4d22-6cc6-4f30-9b5a-3870f8268fe9" containerName="mariadb-account-create-update" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.362826 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce4d22-6cc6-4f30-9b5a-3870f8268fe9" containerName="mariadb-account-create-update" Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.362855 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c01cd54-a15c-4367-8e1c-ee8cdc10373c" containerName="mariadb-account-create-update" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.362866 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c01cd54-a15c-4367-8e1c-ee8cdc10373c" containerName="mariadb-account-create-update" Jan 29 09:26:48 crc kubenswrapper[4771]: 
Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.362898 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56c69ee-17ef-4f36-b312-8a7ba10df44a" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.362907 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56c69ee-17ef-4f36-b312-8a7ba10df44a" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.362927 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdc5aac-5c6a-4ebd-95b6-34a876f299b4" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.362935 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdc5aac-5c6a-4ebd-95b6-34a876f299b4" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.362952 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7037e6e8-a6d3-417e-9a83-091fd1492909" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.362960 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7037e6e8-a6d3-417e-9a83-091fd1492909" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.362991 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-httpd"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363000 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-httpd"
Jan 29 09:26:48 crc kubenswrapper[4771]: E0129 09:26:48.363011 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-api"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363019 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-api"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363456 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-api"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363479 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedabb0e-4487-468e-91d2-af4e8767a0d9" containerName="mariadb-account-create-update"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363490 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdc5aac-5c6a-4ebd-95b6-34a876f299b4" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363527 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" containerName="neutron-httpd"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363551 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ce4d22-6cc6-4f30-9b5a-3870f8268fe9" containerName="mariadb-account-create-update"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363568 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7037e6e8-a6d3-417e-9a83-091fd1492909" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363597 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56c69ee-17ef-4f36-b312-8a7ba10df44a" containerName="mariadb-database-create"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.363618 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c01cd54-a15c-4367-8e1c-ee8cdc10373c" containerName="mariadb-account-create-update"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.374044 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vlnn"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.387065 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.388324 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-frbg2"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.388576 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.388776 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vlnn"]
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.411759 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.411726921 podStartE2EDuration="4.411726921s" podCreationTimestamp="2026-01-29 09:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:26:48.370145675 +0000 UTC m=+1228.492985902" watchObservedRunningTime="2026-01-29 09:26:48.411726921 +0000 UTC m=+1228.534567148"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.473497 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.473786 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-scripts\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.474001 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rh2d\" (UniqueName: \"kubernetes.io/projected/79d3d46b-96ba-460a-a066-52c0a041d34f-kube-api-access-7rh2d\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.474148 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-config-data\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn"
Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.576325 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn"
(UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.576396 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-scripts\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.576444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rh2d\" (UniqueName: \"kubernetes.io/projected/79d3d46b-96ba-460a-a066-52c0a041d34f-kube-api-access-7rh2d\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.576474 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-config-data\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.587408 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-scripts\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.587453 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-config-data\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.587957 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.598399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rh2d\" (UniqueName: \"kubernetes.io/projected/79d3d46b-96ba-460a-a066-52c0a041d34f-kube-api-access-7rh2d\") pod \"nova-cell0-conductor-db-sync-4vlnn\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.774112 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:26:48 crc kubenswrapper[4771]: I0129 09:26:48.852360 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e8bfd2-5035-43a0-80ee-c4ceed1d422c" path="/var/lib/kubelet/pods/84e8bfd2-5035-43a0-80ee-c4ceed1d422c/volumes" Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.284210 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vlnn"] Jan 29 09:26:49 crc kubenswrapper[4771]: W0129 09:26:49.285288 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d3d46b_96ba_460a_a066_52c0a041d34f.slice/crio-d95b8071b34b8c79eb79741d0f5dd607c6f1d2278c1235410dd3df0528a536f7 WatchSource:0}: Error finding container d95b8071b34b8c79eb79741d0f5dd607c6f1d2278c1235410dd3df0528a536f7: Status 404 returned error can't find the container with id d95b8071b34b8c79eb79741d0f5dd607c6f1d2278c1235410dd3df0528a536f7 Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.301593 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerStarted","Data":"06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e"} Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.301772 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.301737 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-central-agent" containerID="cri-o://73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26" gracePeriod=30 Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.301852 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="proxy-httpd" containerID="cri-o://06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e" gracePeriod=30 Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.301854 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="sg-core" containerID="cri-o://783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb" gracePeriod=30 Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.301994 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-notification-agent" containerID="cri-o://a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17" gracePeriod=30 Jan 29 09:26:49 crc kubenswrapper[4771]: I0129 09:26:49.307205 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" event={"ID":"79d3d46b-96ba-460a-a066-52c0a041d34f","Type":"ContainerStarted","Data":"d95b8071b34b8c79eb79741d0f5dd607c6f1d2278c1235410dd3df0528a536f7"} Jan 29 09:26:50 crc kubenswrapper[4771]: I0129 09:26:50.283281 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:50 crc kubenswrapper[4771]: I0129 09:26:50.323994 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
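"Killing container with a grace period ... gracePeriod=30" above is the SIGTERM-then-SIGKILL contract: the runtime sends SIGTERM, waits up to the grace period for the process to exit, and only then force-kills. A toy model of that race with channels (illustrative only):

```go
package main

import (
	"fmt"
	"time"
)

// killWithGrace waits for a voluntary exit (SIGTERM honored) up to the grace
// period, then escalates, the way the kuberuntime_container.go:808 kills do.
func killWithGrace(name string, exited <-chan struct{}, grace time.Duration) {
	fmt.Printf("Killing container %q with a grace period gracePeriod=%v\n", name, grace)
	select {
	case <-exited:
		fmt.Println(name, "exited on SIGTERM within the grace period")
	case <-time.After(grace):
		fmt.Println(name, "did not stop in time; sending SIGKILL")
	}
}

func main() {
	exited := make(chan struct{})
	go func() { time.Sleep(10 * time.Millisecond); close(exited) }()
	killWithGrace("ceilometer-central-agent", exited, 100*time.Millisecond)
}
```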
pod="openstack/ceilometer-0" podStartSLOduration=3.848526273 podStartE2EDuration="8.323968863s" podCreationTimestamp="2026-01-29 09:26:42 +0000 UTC" firstStartedPulling="2026-01-29 09:26:43.704400295 +0000 UTC m=+1223.827240522" lastFinishedPulling="2026-01-29 09:26:48.179842885 +0000 UTC m=+1228.302683112" observedRunningTime="2026-01-29 09:26:49.335522004 +0000 UTC m=+1229.458362231" watchObservedRunningTime="2026-01-29 09:26:50.323968863 +0000 UTC m=+1230.446809090" Jan 29 09:26:50 crc kubenswrapper[4771]: I0129 09:26:50.327362 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b66bb6fb-89w2j" Jan 29 09:26:50 crc kubenswrapper[4771]: I0129 09:26:50.417871 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-659d957f7d-m6f6g"] Jan 29 09:26:50 crc kubenswrapper[4771]: I0129 09:26:50.418176 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-659d957f7d-m6f6g" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-log" containerID="cri-o://e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d" gracePeriod=30 Jan 29 09:26:50 crc kubenswrapper[4771]: I0129 09:26:50.418748 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-659d957f7d-m6f6g" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-api" containerID="cri-o://5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f" gracePeriod=30 Jan 29 09:26:50 crc kubenswrapper[4771]: I0129 09:26:50.602095 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d5dc7fbb8-8h9gn" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.336848 4771 generic.go:334] "Generic (PLEG): container finished" podID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerID="e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d" exitCode=143 Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.336946 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659d957f7d-m6f6g" event={"ID":"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77","Type":"ContainerDied","Data":"e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d"} Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.342404 4771 generic.go:334] "Generic (PLEG): container finished" podID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerID="06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e" exitCode=0 Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.342438 4771 generic.go:334] "Generic (PLEG): container finished" podID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerID="783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb" exitCode=2 Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.342447 4771 generic.go:334] "Generic (PLEG): container finished" podID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerID="a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17" exitCode=0 Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.342460 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerDied","Data":"06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e"} Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.342619 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerDied","Data":"783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb"} Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.342638 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerDied","Data":"a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17"} Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.347067 4771 generic.go:334] "Generic (PLEG): container finished" podID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerID="f44e7228793b3cf23069b72b399157c4fe605e6acbd5233674966340d89bd2a5" exitCode=137 Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.347120 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5dc7fbb8-8h9gn" event={"ID":"55cedb34-7c52-47ec-8f60-5d3e362f5948","Type":"ContainerDied","Data":"f44e7228793b3cf23069b72b399157c4fe605e6acbd5233674966340d89bd2a5"} Jan 29 09:26:51 crc kubenswrapper[4771]: I0129 09:26:51.938977 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.059315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-tls-certs\") pod \"55cedb34-7c52-47ec-8f60-5d3e362f5948\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.059430 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-scripts\") pod \"55cedb34-7c52-47ec-8f60-5d3e362f5948\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.059477 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cedb34-7c52-47ec-8f60-5d3e362f5948-logs\") pod \"55cedb34-7c52-47ec-8f60-5d3e362f5948\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.059503 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbz4g\" (UniqueName: \"kubernetes.io/projected/55cedb34-7c52-47ec-8f60-5d3e362f5948-kube-api-access-gbz4g\") pod \"55cedb34-7c52-47ec-8f60-5d3e362f5948\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.059592 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-combined-ca-bundle\") pod \"55cedb34-7c52-47ec-8f60-5d3e362f5948\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.059616 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-secret-key\") pod 
\"55cedb34-7c52-47ec-8f60-5d3e362f5948\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.059652 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-config-data\") pod \"55cedb34-7c52-47ec-8f60-5d3e362f5948\" (UID: \"55cedb34-7c52-47ec-8f60-5d3e362f5948\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.062908 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cedb34-7c52-47ec-8f60-5d3e362f5948-logs" (OuterVolumeSpecName: "logs") pod "55cedb34-7c52-47ec-8f60-5d3e362f5948" (UID: "55cedb34-7c52-47ec-8f60-5d3e362f5948"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.077904 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55cedb34-7c52-47ec-8f60-5d3e362f5948-kube-api-access-gbz4g" (OuterVolumeSpecName: "kube-api-access-gbz4g") pod "55cedb34-7c52-47ec-8f60-5d3e362f5948" (UID: "55cedb34-7c52-47ec-8f60-5d3e362f5948"). InnerVolumeSpecName "kube-api-access-gbz4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.078874 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "55cedb34-7c52-47ec-8f60-5d3e362f5948" (UID: "55cedb34-7c52-47ec-8f60-5d3e362f5948"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.104651 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-scripts" (OuterVolumeSpecName: "scripts") pod "55cedb34-7c52-47ec-8f60-5d3e362f5948" (UID: "55cedb34-7c52-47ec-8f60-5d3e362f5948"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.122142 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55cedb34-7c52-47ec-8f60-5d3e362f5948" (UID: "55cedb34-7c52-47ec-8f60-5d3e362f5948"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.164236 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.164651 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cedb34-7c52-47ec-8f60-5d3e362f5948-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.164661 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbz4g\" (UniqueName: \"kubernetes.io/projected/55cedb34-7c52-47ec-8f60-5d3e362f5948-kube-api-access-gbz4g\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.164676 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.164688 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.169256 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-config-data" (OuterVolumeSpecName: "config-data") pod "55cedb34-7c52-47ec-8f60-5d3e362f5948" (UID: "55cedb34-7c52-47ec-8f60-5d3e362f5948"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.187795 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "55cedb34-7c52-47ec-8f60-5d3e362f5948" (UID: "55cedb34-7c52-47ec-8f60-5d3e362f5948"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.219540 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.265872 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-combined-ca-bundle\") pod \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.265981 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-log-httpd\") pod \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266133 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-run-httpd\") pod \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266200 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-sg-core-conf-yaml\") pod \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266318 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-scripts\") pod \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266361 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-config-data\") pod \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266396 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgr6g\" (UniqueName: \"kubernetes.io/projected/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-kube-api-access-xgr6g\") pod \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\" (UID: \"fe6c5cd8-c65b-4832-a589-462c4cae7c2d\") " Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266905 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55cedb34-7c52-47ec-8f60-5d3e362f5948-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266925 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55cedb34-7c52-47ec-8f60-5d3e362f5948-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.266896 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe6c5cd8-c65b-4832-a589-462c4cae7c2d" (UID: "fe6c5cd8-c65b-4832-a589-462c4cae7c2d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.267386 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe6c5cd8-c65b-4832-a589-462c4cae7c2d" (UID: "fe6c5cd8-c65b-4832-a589-462c4cae7c2d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.274352 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-scripts" (OuterVolumeSpecName: "scripts") pod "fe6c5cd8-c65b-4832-a589-462c4cae7c2d" (UID: "fe6c5cd8-c65b-4832-a589-462c4cae7c2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.276017 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-kube-api-access-xgr6g" (OuterVolumeSpecName: "kube-api-access-xgr6g") pod "fe6c5cd8-c65b-4832-a589-462c4cae7c2d" (UID: "fe6c5cd8-c65b-4832-a589-462c4cae7c2d"). InnerVolumeSpecName "kube-api-access-xgr6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.328363 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe6c5cd8-c65b-4832-a589-462c4cae7c2d" (UID: "fe6c5cd8-c65b-4832-a589-462c4cae7c2d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.369004 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.369041 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.369051 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.369064 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.369076 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgr6g\" (UniqueName: \"kubernetes.io/projected/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-kube-api-access-xgr6g\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.375543 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.375420 4771 generic.go:334] "Generic (PLEG): container finished" podID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerID="73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26" exitCode=0 Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.375933 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerDied","Data":"73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26"} Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.375993 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe6c5cd8-c65b-4832-a589-462c4cae7c2d","Type":"ContainerDied","Data":"2e31b3ec7b55673c0f9b44f490bc19bcd40b4ffcbcf7ae91881aa341a06b8087"} Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.376012 4771 scope.go:117] "RemoveContainer" containerID="06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.381402 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5dc7fbb8-8h9gn" event={"ID":"55cedb34-7c52-47ec-8f60-5d3e362f5948","Type":"ContainerDied","Data":"c5950547eca69982dd0e9e9814cffb226399c8a4a328d5527b65ecfa56a30944"} Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.381523 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d5dc7fbb8-8h9gn" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.396428 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe6c5cd8-c65b-4832-a589-462c4cae7c2d" (UID: "fe6c5cd8-c65b-4832-a589-462c4cae7c2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.415194 4771 scope.go:117] "RemoveContainer" containerID="783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.422670 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-config-data" (OuterVolumeSpecName: "config-data") pod "fe6c5cd8-c65b-4832-a589-462c4cae7c2d" (UID: "fe6c5cd8-c65b-4832-a589-462c4cae7c2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.436359 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d5dc7fbb8-8h9gn"] Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.445194 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d5dc7fbb8-8h9gn"] Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.447203 4771 scope.go:117] "RemoveContainer" containerID="a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.469175 4771 scope.go:117] "RemoveContainer" containerID="73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.470585 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.470616 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe6c5cd8-c65b-4832-a589-462c4cae7c2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.492140 4771 scope.go:117] "RemoveContainer" containerID="06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.492604 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e\": container with ID starting with 06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e not found: ID does not exist" containerID="06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.492640 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e"} err="failed to get container status \"06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e\": rpc error: code = NotFound desc = could not find container \"06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e\": container with ID starting with 06d1a7f3891a57e30f33c418fbd2d78d416f26e1c2cf7841588541788db8f19e not found: ID does not exist" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.492676 4771 scope.go:117] "RemoveContainer" containerID="783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.493174 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb\": container with ID starting with 783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb not found: ID does not exist" containerID="783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.493206 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb"} err="failed to get container status \"783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb\": rpc error: code = NotFound desc = could not find container 
\"783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb\": container with ID starting with 783f343f4297fb01f8768860b7ac9e3a4e9f5b078df522dca98322cfc46c49fb not found: ID does not exist" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.493226 4771 scope.go:117] "RemoveContainer" containerID="a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.494155 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17\": container with ID starting with a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17 not found: ID does not exist" containerID="a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.494229 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17"} err="failed to get container status \"a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17\": rpc error: code = NotFound desc = could not find container \"a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17\": container with ID starting with a64d9ea3bde87de508f2fffe6b05d7ebea4317b6bdba9fc4f34e01135d6b9b17 not found: ID does not exist" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.494252 4771 scope.go:117] "RemoveContainer" containerID="73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.494540 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26\": container with ID starting with 73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26 not found: ID does not exist" containerID="73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.494569 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26"} err="failed to get container status \"73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26\": rpc error: code = NotFound desc = could not find container \"73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26\": container with ID starting with 73ac28219d18d846d1a0a909feaab47b7471c849449dcf9db5b616b8a1fe1f26 not found: ID does not exist" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.494587 4771 scope.go:117] "RemoveContainer" containerID="6a6bb5208ac980078fcfd13dcef8211355367718988a765859e0cf35d85c777b" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.675499 4771 scope.go:117] "RemoveContainer" containerID="f44e7228793b3cf23069b72b399157c4fe605e6acbd5233674966340d89bd2a5" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.730283 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.741120 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.782793 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.783282 4771 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon-log" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783304 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon-log" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.783320 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783329 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.783353 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-central-agent" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783360 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-central-agent" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.783371 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-notification-agent" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783378 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-notification-agent" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.783414 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="proxy-httpd" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783423 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="proxy-httpd" Jan 29 09:26:52 crc kubenswrapper[4771]: E0129 09:26:52.783440 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="sg-core" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783448 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="sg-core" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783647 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783662 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-notification-agent" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783671 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="ceilometer-central-agent" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783687 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" containerName="horizon-log" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783716 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="sg-core" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.783731 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" containerName="proxy-httpd" Jan 29 09:26:52 crc 
kubenswrapper[4771]: I0129 09:26:52.786173 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.793517 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.794221 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.801851 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.853081 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55cedb34-7c52-47ec-8f60-5d3e362f5948" path="/var/lib/kubelet/pods/55cedb34-7c52-47ec-8f60-5d3e362f5948/volumes" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.854685 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6c5cd8-c65b-4832-a589-462c4cae7c2d" path="/var/lib/kubelet/pods/fe6c5cd8-c65b-4832-a589-462c4cae7c2d/volumes" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.878630 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-scripts\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.878797 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-config-data\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.878837 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bs82\" (UniqueName: \"kubernetes.io/projected/cd4d49d2-fcc6-4706-a6b3-6c910995027f-kube-api-access-2bs82\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.878908 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-run-httpd\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.879053 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.879100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.879165 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-log-httpd\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.980827 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-scripts\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.980912 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-config-data\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.980953 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bs82\" (UniqueName: \"kubernetes.io/projected/cd4d49d2-fcc6-4706-a6b3-6c910995027f-kube-api-access-2bs82\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.980998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-run-httpd\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.981028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.981063 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.981122 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-log-httpd\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.981598 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-log-httpd\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.982049 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-run-httpd\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.987800 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.988043 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-scripts\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.988338 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-config-data\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:52 crc kubenswrapper[4771]: I0129 09:26:52.988590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:53 crc kubenswrapper[4771]: I0129 09:26:53.006114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bs82\" (UniqueName: \"kubernetes.io/projected/cd4d49d2-fcc6-4706-a6b3-6c910995027f-kube-api-access-2bs82\") pod \"ceilometer-0\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " pod="openstack/ceilometer-0" Jan 29 09:26:53 crc kubenswrapper[4771]: I0129 09:26:53.148435 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:26:53 crc kubenswrapper[4771]: I0129 09:26:53.409719 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:26:53 crc kubenswrapper[4771]: I0129 09:26:53.410139 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:26:53 crc kubenswrapper[4771]: I0129 09:26:53.457854 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:26:53 crc kubenswrapper[4771]: I0129 09:26:53.474297 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:26:53 crc kubenswrapper[4771]: I0129 09:26:53.683719 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.026290 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.113846 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-public-tls-certs\") pod \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.114004 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-scripts\") pod \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.114108 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-logs\") pod \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.114148 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-combined-ca-bundle\") pod \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.114242 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khpvc\" (UniqueName: \"kubernetes.io/projected/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-kube-api-access-khpvc\") pod \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.114356 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-config-data\") pod \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.114463 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-logs" (OuterVolumeSpecName: "logs") pod "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" (UID: "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.114502 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-internal-tls-certs\") pod \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\" (UID: \"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77\") " Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.115113 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.129845 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-scripts" (OuterVolumeSpecName: "scripts") pod "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" (UID: "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.133406 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-kube-api-access-khpvc" (OuterVolumeSpecName: "kube-api-access-khpvc") pod "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" (UID: "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77"). InnerVolumeSpecName "kube-api-access-khpvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.174475 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" (UID: "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.189456 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-config-data" (OuterVolumeSpecName: "config-data") pod "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" (UID: "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.217112 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khpvc\" (UniqueName: \"kubernetes.io/projected/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-kube-api-access-khpvc\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.217141 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.217156 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.217167 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.230111 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" (UID: "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.230803 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" (UID: "53ce829b-d48f-4d9e-b6ed-3a076d0e6f77"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.318954 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.319000 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.430578 4771 generic.go:334] "Generic (PLEG): container finished" podID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerID="5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f" exitCode=0 Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.430656 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659d957f7d-m6f6g" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.430686 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659d957f7d-m6f6g" event={"ID":"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77","Type":"ContainerDied","Data":"5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f"} Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.431082 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659d957f7d-m6f6g" event={"ID":"53ce829b-d48f-4d9e-b6ed-3a076d0e6f77","Type":"ContainerDied","Data":"f0a128a280668b57029df2336b01457170cb923b0059fb6062210f2c20f66b9c"} Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.431109 4771 scope.go:117] "RemoveContainer" containerID="5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.434366 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerStarted","Data":"7579f22aeb82a2cf65ce36ee20b65fa12e486962c13cc195a279c18718ccd64b"} Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.435363 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.435393 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.458575 4771 scope.go:117] "RemoveContainer" containerID="e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.479571 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-659d957f7d-m6f6g"] Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.489809 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-659d957f7d-m6f6g"] Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.503376 4771 scope.go:117] "RemoveContainer" containerID="5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f" Jan 29 09:26:54 crc kubenswrapper[4771]: E0129 09:26:54.503948 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f\": container with ID starting with 5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f not found: ID does not 
exist" containerID="5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.503996 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f"} err="failed to get container status \"5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f\": rpc error: code = NotFound desc = could not find container \"5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f\": container with ID starting with 5a05e4fa270878b946d14645828cc965aa7e9ee228a92450f4b3d0ce1e42339f not found: ID does not exist" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.504027 4771 scope.go:117] "RemoveContainer" containerID="e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d" Jan 29 09:26:54 crc kubenswrapper[4771]: E0129 09:26:54.505590 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d\": container with ID starting with e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d not found: ID does not exist" containerID="e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.505672 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d"} err="failed to get container status \"e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d\": rpc error: code = NotFound desc = could not find container \"e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d\": container with ID starting with e090aa9017dfb91c94292d64b9c702a9cf313526bffffdbab5b915ee298dbb5d not found: ID does not exist" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.673866 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.673959 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.713205 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.729183 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:54 crc kubenswrapper[4771]: I0129 09:26:54.848503 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" path="/var/lib/kubelet/pods/53ce829b-d48f-4d9e-b6ed-3a076d0e6f77/volumes" Jan 29 09:26:55 crc kubenswrapper[4771]: I0129 09:26:55.445760 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:55 crc kubenswrapper[4771]: I0129 09:26:55.446175 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:56 crc kubenswrapper[4771]: I0129 09:26:56.489759 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:26:56 crc kubenswrapper[4771]: I0129 09:26:56.489919 4771 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 29 09:26:56 crc kubenswrapper[4771]: I0129 09:26:56.493272 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:26:57 crc kubenswrapper[4771]: I0129 09:26:57.509782 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:26:57 crc kubenswrapper[4771]: I0129 09:26:57.510171 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:26:57 crc kubenswrapper[4771]: I0129 09:26:57.524705 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:27:01 crc kubenswrapper[4771]: I0129 09:27:01.544887 4771 patch_prober.go:28] interesting pod/router-default-5444994796-95rng container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 09:27:01 crc kubenswrapper[4771]: I0129 09:27:01.545561 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-95rng" podUID="72d00e01-7d77-4404-ab20-dcccc7764b69" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:27:03 crc kubenswrapper[4771]: I0129 09:27:03.532638 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerStarted","Data":"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591"} Jan 29 09:27:04 crc kubenswrapper[4771]: I0129 09:27:04.638719 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:07 crc kubenswrapper[4771]: E0129 09:27:07.827982 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 29 09:27:07 crc kubenswrapper[4771]: E0129 09:27:07.828626 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rh2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-4vlnn_openstack(79d3d46b-96ba-460a-a066-52c0a041d34f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:27:07 crc kubenswrapper[4771]: E0129 09:27:07.829730 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" podUID="79d3d46b-96ba-460a-a066-52c0a041d34f" Jan 29 09:27:08 crc kubenswrapper[4771]: I0129 09:27:08.589302 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerStarted","Data":"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9"} Jan 29 09:27:08 crc kubenswrapper[4771]: E0129 09:27:08.591112 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" podUID="79d3d46b-96ba-460a-a066-52c0a041d34f" Jan 29 09:27:09 crc kubenswrapper[4771]: I0129 09:27:09.600412 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerStarted","Data":"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3"} Jan 29 09:27:12 crc kubenswrapper[4771]: I0129 09:27:12.633057 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerStarted","Data":"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6"} Jan 29 09:27:12 crc kubenswrapper[4771]: I0129 09:27:12.633825 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:27:12 crc kubenswrapper[4771]: I0129 09:27:12.633509 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-central-agent" containerID="cri-o://0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" gracePeriod=30 Jan 29 09:27:12 crc kubenswrapper[4771]: I0129 09:27:12.633520 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="proxy-httpd" containerID="cri-o://371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" gracePeriod=30 Jan 29 09:27:12 crc kubenswrapper[4771]: I0129 09:27:12.633597 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-notification-agent" containerID="cri-o://96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" gracePeriod=30 Jan 29 09:27:12 crc kubenswrapper[4771]: I0129 09:27:12.633581 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="sg-core" containerID="cri-o://01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" gracePeriod=30 Jan 29 09:27:12 crc kubenswrapper[4771]: I0129 09:27:12.667059 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.52484691 podStartE2EDuration="20.667035069s" podCreationTimestamp="2026-01-29 09:26:52 +0000 UTC" firstStartedPulling="2026-01-29 09:26:53.704966828 +0000 UTC m=+1233.827807055" lastFinishedPulling="2026-01-29 09:27:11.847154987 +0000 UTC m=+1251.969995214" observedRunningTime="2026-01-29 09:27:12.663152893 +0000 UTC m=+1252.785993120" watchObservedRunningTime="2026-01-29 09:27:12.667035069 +0000 UTC m=+1252.789875296" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.389243 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.554965 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-combined-ca-bundle\") pod \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.555167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-log-httpd\") pod \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.555320 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-sg-core-conf-yaml\") pod \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.555379 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-run-httpd\") pod \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.555433 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bs82\" (UniqueName: \"kubernetes.io/projected/cd4d49d2-fcc6-4706-a6b3-6c910995027f-kube-api-access-2bs82\") pod \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.555464 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-config-data\") pod \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.555577 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-scripts\") pod \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\" (UID: \"cd4d49d2-fcc6-4706-a6b3-6c910995027f\") " Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.555587 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cd4d49d2-fcc6-4706-a6b3-6c910995027f" (UID: "cd4d49d2-fcc6-4706-a6b3-6c910995027f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.556069 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.556410 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cd4d49d2-fcc6-4706-a6b3-6c910995027f" (UID: "cd4d49d2-fcc6-4706-a6b3-6c910995027f"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.561127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-scripts" (OuterVolumeSpecName: "scripts") pod "cd4d49d2-fcc6-4706-a6b3-6c910995027f" (UID: "cd4d49d2-fcc6-4706-a6b3-6c910995027f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.562826 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd4d49d2-fcc6-4706-a6b3-6c910995027f-kube-api-access-2bs82" (OuterVolumeSpecName: "kube-api-access-2bs82") pod "cd4d49d2-fcc6-4706-a6b3-6c910995027f" (UID: "cd4d49d2-fcc6-4706-a6b3-6c910995027f"). InnerVolumeSpecName "kube-api-access-2bs82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.589854 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cd4d49d2-fcc6-4706-a6b3-6c910995027f" (UID: "cd4d49d2-fcc6-4706-a6b3-6c910995027f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.632674 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd4d49d2-fcc6-4706-a6b3-6c910995027f" (UID: "cd4d49d2-fcc6-4706-a6b3-6c910995027f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.645590 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerID="371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" exitCode=0 Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.646525 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerID="01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" exitCode=2 Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.646605 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerID="96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" exitCode=0 Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.646666 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerID="0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" exitCode=0 Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.645669 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerDied","Data":"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6"} Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.646838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerDied","Data":"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3"} Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.646907 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerDied","Data":"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9"} Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.646984 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerDied","Data":"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591"} Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.647053 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd4d49d2-fcc6-4706-a6b3-6c910995027f","Type":"ContainerDied","Data":"7579f22aeb82a2cf65ce36ee20b65fa12e486962c13cc195a279c18718ccd64b"} Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.645660 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.646997 4771 scope.go:117] "RemoveContainer" containerID="371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.658545 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.658582 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.658598 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.658610 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd4d49d2-fcc6-4706-a6b3-6c910995027f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.658623 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bs82\" (UniqueName: \"kubernetes.io/projected/cd4d49d2-fcc6-4706-a6b3-6c910995027f-kube-api-access-2bs82\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.661420 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-config-data" (OuterVolumeSpecName: "config-data") pod "cd4d49d2-fcc6-4706-a6b3-6c910995027f" (UID: "cd4d49d2-fcc6-4706-a6b3-6c910995027f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.672320 4771 scope.go:117] "RemoveContainer" containerID="01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.696857 4771 scope.go:117] "RemoveContainer" containerID="96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.717901 4771 scope.go:117] "RemoveContainer" containerID="0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.741259 4771 scope.go:117] "RemoveContainer" containerID="371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" Jan 29 09:27:13 crc kubenswrapper[4771]: E0129 09:27:13.741825 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": container with ID starting with 371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6 not found: ID does not exist" containerID="371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.741873 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6"} err="failed to get container status \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": rpc error: code = NotFound desc = could not find container \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": container with ID starting with 371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.741901 4771 scope.go:117] "RemoveContainer" containerID="01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" Jan 29 09:27:13 crc kubenswrapper[4771]: E0129 09:27:13.742191 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": container with ID starting with 01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3 not found: ID does not exist" containerID="01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742225 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3"} err="failed to get container status \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": rpc error: code = NotFound desc = could not find container \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": container with ID starting with 01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742247 4771 scope.go:117] "RemoveContainer" containerID="96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" Jan 29 09:27:13 crc kubenswrapper[4771]: E0129 09:27:13.742466 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": container with ID starting with 
96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9 not found: ID does not exist" containerID="96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742490 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9"} err="failed to get container status \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": rpc error: code = NotFound desc = could not find container \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": container with ID starting with 96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742503 4771 scope.go:117] "RemoveContainer" containerID="0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" Jan 29 09:27:13 crc kubenswrapper[4771]: E0129 09:27:13.742678 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": container with ID starting with 0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591 not found: ID does not exist" containerID="0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742718 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591"} err="failed to get container status \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": rpc error: code = NotFound desc = could not find container \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": container with ID starting with 0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742737 4771 scope.go:117] "RemoveContainer" containerID="371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742942 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6"} err="failed to get container status \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": rpc error: code = NotFound desc = could not find container \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": container with ID starting with 371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.742964 4771 scope.go:117] "RemoveContainer" containerID="01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743165 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3"} err="failed to get container status \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": rpc error: code = NotFound desc = could not find container \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": container with ID starting with 01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3 not found: ID does not exist" Jan 29 
09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743186 4771 scope.go:117] "RemoveContainer" containerID="96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743376 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9"} err="failed to get container status \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": rpc error: code = NotFound desc = could not find container \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": container with ID starting with 96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743397 4771 scope.go:117] "RemoveContainer" containerID="0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743596 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591"} err="failed to get container status \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": rpc error: code = NotFound desc = could not find container \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": container with ID starting with 0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743624 4771 scope.go:117] "RemoveContainer" containerID="371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743851 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6"} err="failed to get container status \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": rpc error: code = NotFound desc = could not find container \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": container with ID starting with 371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.743874 4771 scope.go:117] "RemoveContainer" containerID="01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744068 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3"} err="failed to get container status \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": rpc error: code = NotFound desc = could not find container \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": container with ID starting with 01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744202 4771 scope.go:117] "RemoveContainer" containerID="96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744414 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9"} err="failed to get container status 
\"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": rpc error: code = NotFound desc = could not find container \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": container with ID starting with 96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744434 4771 scope.go:117] "RemoveContainer" containerID="0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744604 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591"} err="failed to get container status \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": rpc error: code = NotFound desc = could not find container \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": container with ID starting with 0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744622 4771 scope.go:117] "RemoveContainer" containerID="371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744825 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6"} err="failed to get container status \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": rpc error: code = NotFound desc = could not find container \"371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6\": container with ID starting with 371cc3e31c78610a5448beadca8952c4f1156a74b9abbee63f3e4ab879268cb6 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.744846 4771 scope.go:117] "RemoveContainer" containerID="01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.745028 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3"} err="failed to get container status \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": rpc error: code = NotFound desc = could not find container \"01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3\": container with ID starting with 01d5d0fc530e59b4651e4c039d741f646345887787eddf9a00c018fb178a2cb3 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.745045 4771 scope.go:117] "RemoveContainer" containerID="96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.745224 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9"} err="failed to get container status \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": rpc error: code = NotFound desc = could not find container \"96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9\": container with ID starting with 96c5afa817c7ee5eb9f94663fd25f129fff1aacac138743c10a815adf3ecdde9 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.745244 4771 scope.go:117] "RemoveContainer" 
containerID="0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.745420 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591"} err="failed to get container status \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": rpc error: code = NotFound desc = could not find container \"0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591\": container with ID starting with 0b7282949dc18c2f04ba36bca88701038a14f8ad79b6fa4b514bce58d2fe1591 not found: ID does not exist" Jan 29 09:27:13 crc kubenswrapper[4771]: I0129 09:27:13.762524 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d49d2-fcc6-4706-a6b3-6c910995027f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.058579 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.067379 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.094719 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:14 crc kubenswrapper[4771]: E0129 09:27:14.095448 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-notification-agent" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.095542 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-notification-agent" Jan 29 09:27:14 crc kubenswrapper[4771]: E0129 09:27:14.095624 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-central-agent" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.095767 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-central-agent" Jan 29 09:27:14 crc kubenswrapper[4771]: E0129 09:27:14.095867 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-api" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.095944 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-api" Jan 29 09:27:14 crc kubenswrapper[4771]: E0129 09:27:14.096036 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-log" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.096120 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-log" Jan 29 09:27:14 crc kubenswrapper[4771]: E0129 09:27:14.096200 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="sg-core" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.096286 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="sg-core" Jan 29 09:27:14 crc kubenswrapper[4771]: E0129 09:27:14.096389 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" 
containerName="proxy-httpd" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.096476 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="proxy-httpd" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.096805 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-api" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.096900 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-notification-agent" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.097078 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ce829b-d48f-4d9e-b6ed-3a076d0e6f77" containerName="placement-log" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.097176 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="sg-core" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.097264 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="ceilometer-central-agent" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.097348 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" containerName="proxy-httpd" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.099495 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.102939 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.103370 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.112194 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.271024 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.271085 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.271132 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.272260 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b3ab7b9f2df880b7295ed04362ae5024763966067568a703d675444eb8c341b"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.272533 4771 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://1b3ab7b9f2df880b7295ed04362ae5024763966067568a703d675444eb8c341b" gracePeriod=600 Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.276103 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qchf4\" (UniqueName: \"kubernetes.io/projected/ea014079-6853-423b-b79a-253692450743-kube-api-access-qchf4\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.276299 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-log-httpd\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.276526 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-run-httpd\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.276784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.276832 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-scripts\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.276860 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.276912 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-config-data\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.379276 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.379320 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-scripts\") pod \"ceilometer-0\" 
(UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.379339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.379362 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-config-data\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.379400 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qchf4\" (UniqueName: \"kubernetes.io/projected/ea014079-6853-423b-b79a-253692450743-kube-api-access-qchf4\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.379440 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-log-httpd\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.379501 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-run-httpd\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.380027 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-run-httpd\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.381073 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-log-httpd\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.386566 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.387535 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-config-data\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.387729 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-scripts\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " 
pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.400231 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.409868 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qchf4\" (UniqueName: \"kubernetes.io/projected/ea014079-6853-423b-b79a-253692450743-kube-api-access-qchf4\") pod \"ceilometer-0\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.429137 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.672854 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="1b3ab7b9f2df880b7295ed04362ae5024763966067568a703d675444eb8c341b" exitCode=0 Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.673222 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"1b3ab7b9f2df880b7295ed04362ae5024763966067568a703d675444eb8c341b"} Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.673542 4771 scope.go:117] "RemoveContainer" containerID="20dd9c59445370fa21c3eb4e7a8f0add609e4bdb4bf039a1de172564e5cc26d6" Jan 29 09:27:14 crc kubenswrapper[4771]: I0129 09:27:14.848256 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd4d49d2-fcc6-4706-a6b3-6c910995027f" path="/var/lib/kubelet/pods/cd4d49d2-fcc6-4706-a6b3-6c910995027f/volumes" Jan 29 09:27:15 crc kubenswrapper[4771]: I0129 09:27:15.036454 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:15 crc kubenswrapper[4771]: I0129 09:27:15.684951 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerStarted","Data":"1ece6dc1d49ba7243dcb69781e1ab1e4e97c0b45fce071845e3faf469b89e139"} Jan 29 09:27:15 crc kubenswrapper[4771]: I0129 09:27:15.688227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"182b687cc8105c26ca625b4fc83c6757431c23f77bbdc17b6ccd0dabae3f7a24"} Jan 29 09:27:16 crc kubenswrapper[4771]: I0129 09:27:16.292336 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:16 crc kubenswrapper[4771]: I0129 09:27:16.699470 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerStarted","Data":"cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d"} Jan 29 09:27:17 crc kubenswrapper[4771]: I0129 09:27:17.817049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerStarted","Data":"bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34"} Jan 29 09:27:18 crc kubenswrapper[4771]: I0129 09:27:18.850558 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerStarted","Data":"4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d"} Jan 29 09:27:20 crc kubenswrapper[4771]: I0129 09:27:20.875115 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerStarted","Data":"b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9"} Jan 29 09:27:20 crc kubenswrapper[4771]: I0129 09:27:20.875781 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-central-agent" containerID="cri-o://cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d" gracePeriod=30 Jan 29 09:27:20 crc kubenswrapper[4771]: I0129 09:27:20.875973 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:27:20 crc kubenswrapper[4771]: I0129 09:27:20.876151 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea014079-6853-423b-b79a-253692450743" containerName="proxy-httpd" containerID="cri-o://b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9" gracePeriod=30 Jan 29 09:27:20 crc kubenswrapper[4771]: I0129 09:27:20.876242 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea014079-6853-423b-b79a-253692450743" containerName="sg-core" containerID="cri-o://4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d" gracePeriod=30 Jan 29 09:27:20 crc kubenswrapper[4771]: I0129 09:27:20.876286 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-notification-agent" containerID="cri-o://bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34" gracePeriod=30 Jan 29 09:27:20 crc kubenswrapper[4771]: I0129 09:27:20.910161 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.961903411 podStartE2EDuration="6.910136309s" podCreationTimestamp="2026-01-29 09:27:14 +0000 UTC" firstStartedPulling="2026-01-29 09:27:15.045995754 +0000 UTC m=+1255.168835981" lastFinishedPulling="2026-01-29 09:27:19.994228652 +0000 UTC m=+1260.117068879" observedRunningTime="2026-01-29 09:27:20.899833098 +0000 UTC m=+1261.022673345" watchObservedRunningTime="2026-01-29 09:27:20.910136309 +0000 UTC m=+1261.032976526" Jan 29 09:27:21 crc kubenswrapper[4771]: I0129 09:27:21.889950 4771 generic.go:334] "Generic (PLEG): container finished" podID="ea014079-6853-423b-b79a-253692450743" containerID="b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9" exitCode=0 Jan 29 09:27:21 crc kubenswrapper[4771]: I0129 09:27:21.890273 4771 generic.go:334] "Generic (PLEG): container finished" podID="ea014079-6853-423b-b79a-253692450743" containerID="4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d" exitCode=2 Jan 29 09:27:21 crc kubenswrapper[4771]: I0129 09:27:21.890288 4771 generic.go:334] "Generic (PLEG): container finished" podID="ea014079-6853-423b-b79a-253692450743" containerID="bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34" exitCode=0 Jan 29 09:27:21 crc kubenswrapper[4771]: I0129 09:27:21.890018 4771 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerDied","Data":"b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9"} Jan 29 09:27:21 crc kubenswrapper[4771]: I0129 09:27:21.890326 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerDied","Data":"4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d"} Jan 29 09:27:21 crc kubenswrapper[4771]: I0129 09:27:21.890339 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerDied","Data":"bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34"} Jan 29 09:27:24 crc kubenswrapper[4771]: I0129 09:27:24.936166 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" event={"ID":"79d3d46b-96ba-460a-a066-52c0a041d34f","Type":"ContainerStarted","Data":"c53dd064e0c166279cf4926de8577748404b7bf625bffdbe87505a19726b8404"} Jan 29 09:27:24 crc kubenswrapper[4771]: I0129 09:27:24.966240 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" podStartSLOduration=1.753445634 podStartE2EDuration="36.96621885s" podCreationTimestamp="2026-01-29 09:26:48 +0000 UTC" firstStartedPulling="2026-01-29 09:26:49.289360783 +0000 UTC m=+1229.412201010" lastFinishedPulling="2026-01-29 09:27:24.502133999 +0000 UTC m=+1264.624974226" observedRunningTime="2026-01-29 09:27:24.963451474 +0000 UTC m=+1265.086291721" watchObservedRunningTime="2026-01-29 09:27:24.96621885 +0000 UTC m=+1265.089059077" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.797863 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.893337 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qchf4\" (UniqueName: \"kubernetes.io/projected/ea014079-6853-423b-b79a-253692450743-kube-api-access-qchf4\") pod \"ea014079-6853-423b-b79a-253692450743\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.893549 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-run-httpd\") pod \"ea014079-6853-423b-b79a-253692450743\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.893595 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-scripts\") pod \"ea014079-6853-423b-b79a-253692450743\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.893667 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-log-httpd\") pod \"ea014079-6853-423b-b79a-253692450743\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.893715 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-config-data\") pod \"ea014079-6853-423b-b79a-253692450743\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.893747 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-sg-core-conf-yaml\") pod \"ea014079-6853-423b-b79a-253692450743\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.893874 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-combined-ca-bundle\") pod \"ea014079-6853-423b-b79a-253692450743\" (UID: \"ea014079-6853-423b-b79a-253692450743\") " Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.894134 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ea014079-6853-423b-b79a-253692450743" (UID: "ea014079-6853-423b-b79a-253692450743"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.894227 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ea014079-6853-423b-b79a-253692450743" (UID: "ea014079-6853-423b-b79a-253692450743"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.894548 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.894580 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea014079-6853-423b-b79a-253692450743-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.899417 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea014079-6853-423b-b79a-253692450743-kube-api-access-qchf4" (OuterVolumeSpecName: "kube-api-access-qchf4") pod "ea014079-6853-423b-b79a-253692450743" (UID: "ea014079-6853-423b-b79a-253692450743"). InnerVolumeSpecName "kube-api-access-qchf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.911142 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-scripts" (OuterVolumeSpecName: "scripts") pod "ea014079-6853-423b-b79a-253692450743" (UID: "ea014079-6853-423b-b79a-253692450743"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.923139 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ea014079-6853-423b-b79a-253692450743" (UID: "ea014079-6853-423b-b79a-253692450743"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.950780 4771 generic.go:334] "Generic (PLEG): container finished" podID="ea014079-6853-423b-b79a-253692450743" containerID="cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d" exitCode=0 Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.950847 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerDied","Data":"cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d"} Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.950904 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.950932 4771 scope.go:117] "RemoveContainer" containerID="b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.950914 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea014079-6853-423b-b79a-253692450743","Type":"ContainerDied","Data":"1ece6dc1d49ba7243dcb69781e1ab1e4e97c0b45fce071845e3faf469b89e139"} Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.988761 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea014079-6853-423b-b79a-253692450743" (UID: "ea014079-6853-423b-b79a-253692450743"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.997057 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.997098 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.997110 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qchf4\" (UniqueName: \"kubernetes.io/projected/ea014079-6853-423b-b79a-253692450743-kube-api-access-qchf4\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:25 crc kubenswrapper[4771]: I0129 09:27:25.997120 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.019184 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-config-data" (OuterVolumeSpecName: "config-data") pod "ea014079-6853-423b-b79a-253692450743" (UID: "ea014079-6853-423b-b79a-253692450743"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.059115 4771 scope.go:117] "RemoveContainer" containerID="4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.081585 4771 scope.go:117] "RemoveContainer" containerID="bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.099468 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea014079-6853-423b-b79a-253692450743-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.112306 4771 scope.go:117] "RemoveContainer" containerID="cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.132058 4771 scope.go:117] "RemoveContainer" containerID="b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9" Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.132626 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9\": container with ID starting with b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9 not found: ID does not exist" containerID="b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.132787 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9"} err="failed to get container status \"b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9\": rpc error: code = NotFound desc = could not find container \"b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9\": container with ID starting with 
b11e2ceedf00936571a6ec43fc44a1200fedd42308cc78854bbf69cdd23ec4c9 not found: ID does not exist" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.132820 4771 scope.go:117] "RemoveContainer" containerID="4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d" Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.133496 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d\": container with ID starting with 4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d not found: ID does not exist" containerID="4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.133542 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d"} err="failed to get container status \"4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d\": rpc error: code = NotFound desc = could not find container \"4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d\": container with ID starting with 4973c9712cc4f4a6349567e3166f754f1412c3cde64b72b3597dff487071db8d not found: ID does not exist" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.133573 4771 scope.go:117] "RemoveContainer" containerID="bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34" Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.133921 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34\": container with ID starting with bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34 not found: ID does not exist" containerID="bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.133954 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34"} err="failed to get container status \"bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34\": rpc error: code = NotFound desc = could not find container \"bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34\": container with ID starting with bd55caa9c55f88249c0b74d4144889dcd754cc88c573fa8d6866ea46f8b33e34 not found: ID does not exist" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.133975 4771 scope.go:117] "RemoveContainer" containerID="cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d" Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.134234 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d\": container with ID starting with cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d not found: ID does not exist" containerID="cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.134270 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d"} err="failed to get container status \"cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d\": rpc 
error: code = NotFound desc = could not find container \"cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d\": container with ID starting with cafe54c8291f1f8da894fa7c0027dd1585ba720b290d5fc5ec0476055cd9c78d not found: ID does not exist"
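The burst of RemoveContainer / "container not found" errors above (b11e2cee..., 4973c971..., bd55caa9..., cafe54c8...) is the kubelet re-issuing deletes for containers whose records CRI-O has already dropped; because the runtime answers with the gRPC NotFound code, the delete is effectively idempotent. A minimal sketch of that tolerance pattern, using a stand-in interface rather than the real CRI runtime client (the interface and fake below are illustrative assumptions, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// remover stands in for the CRI runtime service; only the call shape
// matters for this sketch.
type remover interface {
	RemoveContainer(id string) error
}

// removeContainer treats gRPC NotFound as success: the container is
// already gone, which is exactly what the DeleteContainer errors above
// describe. status.Code returns codes.OK for a nil error, so the happy
// path also falls through cleanly.
func removeContainer(r remover, id string) error {
	err := r.RemoveContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil // already removed; nothing left to do
	}
	return err
}

// fakeRuntime always reports NotFound, mimicking CRI-O's response for
// the b11e2cee... container above.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeContainer(fakeRuntime{}, "b11e2cee")
	fmt.Println("delete error:", err) // <nil>: NotFound swallowed
}

This is why the kubelet logs the failures at error level but proceeds anyway: the end state (container absent) is the desired one.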
Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.289529 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.303227 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.318955 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.319490 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea014079-6853-423b-b79a-253692450743" containerName="proxy-httpd" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319514 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea014079-6853-423b-b79a-253692450743" containerName="proxy-httpd" Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.319559 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-notification-agent" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319568 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-notification-agent" Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.319589 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-central-agent" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319597 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-central-agent" Jan 29 09:27:26 crc kubenswrapper[4771]: E0129 09:27:26.319613 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea014079-6853-423b-b79a-253692450743" containerName="sg-core" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319621 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea014079-6853-423b-b79a-253692450743" containerName="sg-core" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319869 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea014079-6853-423b-b79a-253692450743" containerName="sg-core" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319899 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-notification-agent" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319918 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea014079-6853-423b-b79a-253692450743" containerName="proxy-httpd" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.319938 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea014079-6853-423b-b79a-253692450743" containerName="ceilometer-central-agent" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.322453 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.325213 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.325276 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.333405 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.405840 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-log-httpd\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.405903 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-run-httpd\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.405938 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-scripts\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.405959 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.406183 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.406493 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6zl6\" (UniqueName: \"kubernetes.io/projected/9da5d24e-199a-41db-a247-048e7fae22a1-kube-api-access-v6zl6\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.406558 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-config-data\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.508750 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-scripts\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0"
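Each volume the reconciler mounts here lands under the kubelet's pod directory, with the "/" in the plugin name escaped as "~" (the restorecon lines at the top of this log show the same kubernetes.io~configmap layout). A rough sketch that enumerates a pod's mounted volumes from that directory tree; the pod UID is the replacement ceilometer-0's, the helper name is made up for illustration, and running it would need root on the node:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listPodVolumes walks /var/lib/kubelet/pods/<uid>/volumes, whose first
// level is the volume plugin with "/" escaped as "~" (kubernetes.io~secret,
// kubernetes.io~empty-dir, ...) and whose second level is the volume name,
// matching UniqueNames like kubernetes.io/secret/9da5d24e-...-scripts in
// the MountVolume log entries around this point.
func listPodVolumes(root, podUID string) error {
	base := filepath.Join(root, podUID, "volumes")
	plugins, err := os.ReadDir(base)
	if err != nil {
		return err
	}
	for _, p := range plugins {
		vols, err := os.ReadDir(filepath.Join(base, p.Name()))
		if err != nil {
			return err
		}
		for _, v := range vols {
			fmt.Printf("%s: %s\n", p.Name(), v.Name())
		}
	}
	return nil
}

func main() {
	// Pod UID taken from the ceilometer-0 entries above; illustrative
	// rather than something to run as-is on an arbitrary machine.
	_ = listPodVolumes("/var/lib/kubelet/pods", "9da5d24e-199a-41db-a247-048e7fae22a1")
}

The "Cleaned up orphaned pod volumes dir" entry further down is the reverse operation: once every volume is torn down, the old pod's directory under the same root is removed.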
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.509062 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.509148 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zl6\" (UniqueName: \"kubernetes.io/projected/9da5d24e-199a-41db-a247-048e7fae22a1-kube-api-access-v6zl6\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.509172 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-config-data\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.509227 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-log-httpd\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.509260 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-run-httpd\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.510013 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-run-httpd\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.510094 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-log-httpd\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.513128 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.514230 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.515223 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-scripts\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.515979 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-config-data\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.528536 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zl6\" (UniqueName: \"kubernetes.io/projected/9da5d24e-199a-41db-a247-048e7fae22a1-kube-api-access-v6zl6\") pod \"ceilometer-0\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.639670 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:27:26 crc kubenswrapper[4771]: I0129 09:27:26.855029 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea014079-6853-423b-b79a-253692450743" path="/var/lib/kubelet/pods/ea014079-6853-423b-b79a-253692450743/volumes" Jan 29 09:27:27 crc kubenswrapper[4771]: I0129 09:27:27.164875 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:27:27 crc kubenswrapper[4771]: I0129 09:27:27.976352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerStarted","Data":"7f5562a9150d9a96237103c51bca4734f26a68e6efbfa142319c84604de2c800"} Jan 29 09:27:28 crc kubenswrapper[4771]: I0129 09:27:28.987240 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerStarted","Data":"631b200a269ea89babc1e706e0a8d3068a8b9c2ef95412efae012a6b85b84353"} Jan 29 09:27:29 crc kubenswrapper[4771]: I0129 09:27:29.998329 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerStarted","Data":"2dbf0478281227b10ea315e001b8c032dc8f2bb132d6fdff02a680491ee537ce"} Jan 29 09:27:31 crc kubenswrapper[4771]: I0129 09:27:31.010370 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerStarted","Data":"aae3234fdd87e267f0d537208708be9ac3820446382bdd862bc55875a4b02637"} Jan 29 09:27:32 crc kubenswrapper[4771]: I0129 09:27:32.023450 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerStarted","Data":"042914a4791bcaa9f38ee7a6859e5ab9eccc3e641aecab635856a2ddcbd5c0f0"} Jan 29 09:27:32 crc kubenswrapper[4771]: I0129 09:27:32.046388 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.588213685 podStartE2EDuration="6.046366543s" podCreationTimestamp="2026-01-29 09:27:26 +0000 UTC" firstStartedPulling="2026-01-29 09:27:27.182058257 +0000 UTC m=+1267.304898484" lastFinishedPulling="2026-01-29 09:27:31.640211115 +0000 UTC m=+1271.763051342" observedRunningTime="2026-01-29 09:27:32.043769402 +0000 UTC m=+1272.166609639" 
watchObservedRunningTime="2026-01-29 09:27:32.046366543 +0000 UTC m=+1272.169206780" Jan 29 09:27:33 crc kubenswrapper[4771]: I0129 09:27:33.031896 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:27:37 crc kubenswrapper[4771]: I0129 09:27:37.073212 4771 generic.go:334] "Generic (PLEG): container finished" podID="79d3d46b-96ba-460a-a066-52c0a041d34f" containerID="c53dd064e0c166279cf4926de8577748404b7bf625bffdbe87505a19726b8404" exitCode=0 Jan 29 09:27:37 crc kubenswrapper[4771]: I0129 09:27:37.073356 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" event={"ID":"79d3d46b-96ba-460a-a066-52c0a041d34f","Type":"ContainerDied","Data":"c53dd064e0c166279cf4926de8577748404b7bf625bffdbe87505a19726b8404"} Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.447511 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.577020 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rh2d\" (UniqueName: \"kubernetes.io/projected/79d3d46b-96ba-460a-a066-52c0a041d34f-kube-api-access-7rh2d\") pod \"79d3d46b-96ba-460a-a066-52c0a041d34f\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.577149 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-config-data\") pod \"79d3d46b-96ba-460a-a066-52c0a041d34f\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.577500 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-scripts\") pod \"79d3d46b-96ba-460a-a066-52c0a041d34f\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.577625 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-combined-ca-bundle\") pod \"79d3d46b-96ba-460a-a066-52c0a041d34f\" (UID: \"79d3d46b-96ba-460a-a066-52c0a041d34f\") " Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.583866 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d3d46b-96ba-460a-a066-52c0a041d34f-kube-api-access-7rh2d" (OuterVolumeSpecName: "kube-api-access-7rh2d") pod "79d3d46b-96ba-460a-a066-52c0a041d34f" (UID: "79d3d46b-96ba-460a-a066-52c0a041d34f"). InnerVolumeSpecName "kube-api-access-7rh2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.584319 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-scripts" (OuterVolumeSpecName: "scripts") pod "79d3d46b-96ba-460a-a066-52c0a041d34f" (UID: "79d3d46b-96ba-460a-a066-52c0a041d34f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.608667 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d3d46b-96ba-460a-a066-52c0a041d34f" (UID: "79d3d46b-96ba-460a-a066-52c0a041d34f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.610680 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-config-data" (OuterVolumeSpecName: "config-data") pod "79d3d46b-96ba-460a-a066-52c0a041d34f" (UID: "79d3d46b-96ba-460a-a066-52c0a041d34f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.680616 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.680654 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rh2d\" (UniqueName: \"kubernetes.io/projected/79d3d46b-96ba-460a-a066-52c0a041d34f-kube-api-access-7rh2d\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.680687 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:38 crc kubenswrapper[4771]: I0129 09:27:38.680707 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3d46b-96ba-460a-a066-52c0a041d34f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.111506 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" event={"ID":"79d3d46b-96ba-460a-a066-52c0a041d34f","Type":"ContainerDied","Data":"d95b8071b34b8c79eb79741d0f5dd607c6f1d2278c1235410dd3df0528a536f7"} Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.111553 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95b8071b34b8c79eb79741d0f5dd607c6f1d2278c1235410dd3df0528a536f7" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.111636 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vlnn" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.194132 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:27:39 crc kubenswrapper[4771]: E0129 09:27:39.194655 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d3d46b-96ba-460a-a066-52c0a041d34f" containerName="nova-cell0-conductor-db-sync" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.194680 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d3d46b-96ba-460a-a066-52c0a041d34f" containerName="nova-cell0-conductor-db-sync" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.194949 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d3d46b-96ba-460a-a066-52c0a041d34f" containerName="nova-cell0-conductor-db-sync" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.195623 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.198512 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-frbg2" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.202820 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.219053 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.293299 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dddef1c-bcb1-48f7-816d-d24276dd7571-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.293367 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9842\" (UniqueName: \"kubernetes.io/projected/2dddef1c-bcb1-48f7-816d-d24276dd7571-kube-api-access-t9842\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.293514 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dddef1c-bcb1-48f7-816d-d24276dd7571-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.395031 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dddef1c-bcb1-48f7-816d-d24276dd7571-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.395151 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dddef1c-bcb1-48f7-816d-d24276dd7571-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 
09:27:39.395185 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9842\" (UniqueName: \"kubernetes.io/projected/2dddef1c-bcb1-48f7-816d-d24276dd7571-kube-api-access-t9842\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.399852 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dddef1c-bcb1-48f7-816d-d24276dd7571-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.415991 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dddef1c-bcb1-48f7-816d-d24276dd7571-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.418650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9842\" (UniqueName: \"kubernetes.io/projected/2dddef1c-bcb1-48f7-816d-d24276dd7571-kube-api-access-t9842\") pod \"nova-cell0-conductor-0\" (UID: \"2dddef1c-bcb1-48f7-816d-d24276dd7571\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.516287 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:39 crc kubenswrapper[4771]: I0129 09:27:39.988798 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:27:40 crc kubenswrapper[4771]: I0129 09:27:40.122860 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2dddef1c-bcb1-48f7-816d-d24276dd7571","Type":"ContainerStarted","Data":"e39bebf63ccb9da53e7f04c976a694ca3fc5e6a6d72233e761ae7471268cb92a"} Jan 29 09:27:41 crc kubenswrapper[4771]: I0129 09:27:41.138830 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2dddef1c-bcb1-48f7-816d-d24276dd7571","Type":"ContainerStarted","Data":"e67972501d0fe825830ce56878726c538c11e0b468244f3042966e71108b12fc"} Jan 29 09:27:41 crc kubenswrapper[4771]: I0129 09:27:41.139161 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:41 crc kubenswrapper[4771]: I0129 09:27:41.160470 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.160447271 podStartE2EDuration="2.160447271s" podCreationTimestamp="2026-01-29 09:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:27:41.152872514 +0000 UTC m=+1281.275712751" watchObservedRunningTime="2026-01-29 09:27:41.160447271 +0000 UTC m=+1281.283287498" Jan 29 09:27:49 crc kubenswrapper[4771]: I0129 09:27:49.546975 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.042402 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4dv79"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.044348 
4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.047095 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.053110 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4dv79"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.055897 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.148362 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-scripts\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.148463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-config-data\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.148613 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.148662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdr7b\" (UniqueName: \"kubernetes.io/projected/9ee2c782-d165-4a0b-bd83-9f506dd349b1-kube-api-access-mdr7b\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.211475 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.213119 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.216502 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.239818 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.250133 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-config-data\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.250249 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.250283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdr7b\" (UniqueName: \"kubernetes.io/projected/9ee2c782-d165-4a0b-bd83-9f506dd349b1-kube-api-access-mdr7b\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.250369 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-scripts\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.261349 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-config-data\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.272909 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.275582 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdr7b\" (UniqueName: \"kubernetes.io/projected/9ee2c782-d165-4a0b-bd83-9f506dd349b1-kube-api-access-mdr7b\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.315561 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-scripts\") pod \"nova-cell0-cell-mapping-4dv79\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.319756 4771 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.333227 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.340858 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.353043 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgs47\" (UniqueName: \"kubernetes.io/projected/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-kube-api-access-fgs47\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.353137 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.353223 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-config-data\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.370610 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.372819 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.374860 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.384839 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465229 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-logs\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465308 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgs47\" (UniqueName: \"kubernetes.io/projected/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-kube-api-access-fgs47\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465337 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465361 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465394 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-config-data\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465419 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465503 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465540 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzw7j\" (UniqueName: \"kubernetes.io/projected/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-kube-api-access-vzw7j\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465611 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-config-data\") pod 
\"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.465663 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvf7v\" (UniqueName: \"kubernetes.io/projected/be7dbf88-c4fe-4169-8794-d2f6880bdf07-kube-api-access-fvf7v\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.493484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-config-data\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.517951 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgs47\" (UniqueName: \"kubernetes.io/projected/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-kube-api-access-fgs47\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.522822 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.548688 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.571243 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.580544 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzw7j\" (UniqueName: \"kubernetes.io/projected/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-kube-api-access-vzw7j\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.580736 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvf7v\" (UniqueName: \"kubernetes.io/projected/be7dbf88-c4fe-4169-8794-d2f6880bdf07-kube-api-access-fvf7v\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.580811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-logs\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.580875 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.580904 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.580949 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-config-data\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.580979 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.582383 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.582696 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-logs\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.597076 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.597471 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-config-data\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.602537 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.606941 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.612883 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f4d8f7df9-nzhdn"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.614627 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvf7v\" (UniqueName: \"kubernetes.io/projected/be7dbf88-c4fe-4169-8794-d2f6880bdf07-kube-api-access-fvf7v\") pod \"nova-cell1-novncproxy-0\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.622212 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vzw7j\" (UniqueName: \"kubernetes.io/projected/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-kube-api-access-vzw7j\") pod \"nova-metadata-0\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.631239 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4d8f7df9-nzhdn"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.631619 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.654183 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.656577 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.659569 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.675399 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.789983 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-config-data\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790037 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cf48cb9-50ec-4565-a949-7175c133f3e7-logs\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-config\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790111 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790139 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvwd\" (UniqueName: \"kubernetes.io/projected/98006db9-8ac9-4fc6-b552-8fc014985454-kube-api-access-2zvwd\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790211 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 
crc kubenswrapper[4771]: I0129 09:27:50.790240 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-svc\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790255 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790296 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cpq5\" (UniqueName: \"kubernetes.io/projected/2cf48cb9-50ec-4565-a949-7175c133f3e7-kube-api-access-6cpq5\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.790326 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.859269 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.899655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-config-data\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900307 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cf48cb9-50ec-4565-a949-7175c133f3e7-logs\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900354 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-config\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900423 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900449 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvwd\" (UniqueName: \"kubernetes.io/projected/98006db9-8ac9-4fc6-b552-8fc014985454-kube-api-access-2zvwd\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: 
\"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900563 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900639 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-svc\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900659 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900729 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cpq5\" (UniqueName: \"kubernetes.io/projected/2cf48cb9-50ec-4565-a949-7175c133f3e7-kube-api-access-6cpq5\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.900752 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.901549 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.902080 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-svc\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.902811 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-config\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.903229 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cf48cb9-50ec-4565-a949-7175c133f3e7-logs\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.904047 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.904606 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.915503 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-config-data\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.916057 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.927171 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.934902 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cpq5\" (UniqueName: \"kubernetes.io/projected/2cf48cb9-50ec-4565-a949-7175c133f3e7-kube-api-access-6cpq5\") pod \"nova-api-0\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " pod="openstack/nova-api-0" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.937726 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvwd\" (UniqueName: \"kubernetes.io/projected/98006db9-8ac9-4fc6-b552-8fc014985454-kube-api-access-2zvwd\") pod \"dnsmasq-dns-5f4d8f7df9-nzhdn\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.975297 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:50 crc kubenswrapper[4771]: I0129 09:27:50.991142 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.249756 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.290477 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4dv79"] Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.530027 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kj4q4"] Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.531594 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.535757 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.536000 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.583790 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kj4q4"] Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.622112 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-config-data\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.622274 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.622310 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-scripts\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.622520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz95t\" (UniqueName: \"kubernetes.io/projected/6e458d2e-a019-469c-aee4-869073bfd47b-kube-api-access-wz95t\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.634101 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.724868 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.724916 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-scripts\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.724943 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz95t\" (UniqueName: \"kubernetes.io/projected/6e458d2e-a019-469c-aee4-869073bfd47b-kube-api-access-wz95t\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: 
\"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.725255 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-config-data\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.730716 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.730702 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-config-data\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.733312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-scripts\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.760359 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz95t\" (UniqueName: \"kubernetes.io/projected/6e458d2e-a019-469c-aee4-869073bfd47b-kube-api-access-wz95t\") pod \"nova-cell1-conductor-db-sync-kj4q4\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.850664 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4d8f7df9-nzhdn"] Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.930310 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.930310 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kj4q4"
Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.933666 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 09:27:51 crc kubenswrapper[4771]: W0129 09:27:51.940848 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20561ef4_5ba0_4698_8cb2_5d1c6f99ca46.slice/crio-f3c229f5b4763582895c2c6dd037715c7d0da00738574fea77ac8daedc0665d5 WatchSource:0}: Error finding container f3c229f5b4763582895c2c6dd037715c7d0da00738574fea77ac8daedc0665d5: Status 404 returned error can't find the container with id f3c229f5b4763582895c2c6dd037715c7d0da00738574fea77ac8daedc0665d5
Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.944628 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 09:27:51 crc kubenswrapper[4771]: I0129 09:27:51.945066 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:27:51 crc kubenswrapper[4771]: W0129 09:27:51.950787 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf48cb9_50ec_4565_a949_7175c133f3e7.slice/crio-c58c435400fe024e3f1438bf224ea335cc2f9aaa34475deae77336b8bf3a11a4 WatchSource:0}: Error finding container c58c435400fe024e3f1438bf224ea335cc2f9aaa34475deae77336b8bf3a11a4: Status 404 returned error can't find the container with id c58c435400fe024e3f1438bf224ea335cc2f9aaa34475deae77336b8bf3a11a4
Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.301560 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"612bc79f-9cad-4a4c-b575-1b3d5a6a6099","Type":"ContainerStarted","Data":"3d6c51fcf71ce376ba548e8af67e7097a8f77ef17039dd2c624214e88b45ce4c"}
Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.305045 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cf48cb9-50ec-4565-a949-7175c133f3e7","Type":"ContainerStarted","Data":"c58c435400fe024e3f1438bf224ea335cc2f9aaa34475deae77336b8bf3a11a4"}
Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.311786 4771 generic.go:334] "Generic (PLEG): container finished" podID="98006db9-8ac9-4fc6-b552-8fc014985454" containerID="b3e8dc3d10842aab2e728f6eb25be6d42936ffa4b4fd9a07e300dcf9c3bdfc01" exitCode=0
Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.312013 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" event={"ID":"98006db9-8ac9-4fc6-b552-8fc014985454","Type":"ContainerDied","Data":"b3e8dc3d10842aab2e728f6eb25be6d42936ffa4b4fd9a07e300dcf9c3bdfc01"}
Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.312064 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" event={"ID":"98006db9-8ac9-4fc6-b552-8fc014985454","Type":"ContainerStarted","Data":"d6db7ad1a1ee7a50765feab01023f67ef40117bf4c867a174bbd24501efc99b0"}
Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.317573 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46","Type":"ContainerStarted","Data":"f3c229f5b4763582895c2c6dd037715c7d0da00738574fea77ac8daedc0665d5"}
Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.331429 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-4dv79" event={"ID":"9ee2c782-d165-4a0b-bd83-9f506dd349b1","Type":"ContainerStarted","Data":"cad377085e919a01bef4cc3703f76e157fa912ada11dbde61abdaa398f83fdfc"} Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.331497 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4dv79" event={"ID":"9ee2c782-d165-4a0b-bd83-9f506dd349b1","Type":"ContainerStarted","Data":"2ce077596638b6680156f2a49ac6fdd5cef855f9be6eaa74aa299f96cc5fbd82"} Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.349126 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"be7dbf88-c4fe-4169-8794-d2f6880bdf07","Type":"ContainerStarted","Data":"3dcfc678bfa60db8596350d24d52dbb208b7e1a8679afc13084db3402845c6bb"} Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.358779 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4dv79" podStartSLOduration=2.35875705 podStartE2EDuration="2.35875705s" podCreationTimestamp="2026-01-29 09:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:27:52.357869656 +0000 UTC m=+1292.480709883" watchObservedRunningTime="2026-01-29 09:27:52.35875705 +0000 UTC m=+1292.481597277" Jan 29 09:27:52 crc kubenswrapper[4771]: I0129 09:27:52.531239 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kj4q4"] Jan 29 09:27:52 crc kubenswrapper[4771]: E0129 09:27:52.547879 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98006db9_8ac9_4fc6_b552_8fc014985454.slice/crio-b3e8dc3d10842aab2e728f6eb25be6d42936ffa4b4fd9a07e300dcf9c3bdfc01.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98006db9_8ac9_4fc6_b552_8fc014985454.slice/crio-conmon-b3e8dc3d10842aab2e728f6eb25be6d42936ffa4b4fd9a07e300dcf9c3bdfc01.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:27:52 crc kubenswrapper[4771]: W0129 09:27:52.566153 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e458d2e_a019_469c_aee4_869073bfd47b.slice/crio-9ad11d87ee266de1d421da8368e94fe442729754e43be766c49161861f932047 WatchSource:0}: Error finding container 9ad11d87ee266de1d421da8368e94fe442729754e43be766c49161861f932047: Status 404 returned error can't find the container with id 9ad11d87ee266de1d421da8368e94fe442729754e43be766c49161861f932047 Jan 29 09:27:53 crc kubenswrapper[4771]: I0129 09:27:53.378086 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" event={"ID":"6e458d2e-a019-469c-aee4-869073bfd47b","Type":"ContainerStarted","Data":"92df250051b5a4b3817ccfc9478c4f6d465f2e4078aff42e30781aa97c307fb5"} Jan 29 09:27:53 crc kubenswrapper[4771]: I0129 09:27:53.378491 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" event={"ID":"6e458d2e-a019-469c-aee4-869073bfd47b","Type":"ContainerStarted","Data":"9ad11d87ee266de1d421da8368e94fe442729754e43be766c49161861f932047"} Jan 29 09:27:53 crc kubenswrapper[4771]: I0129 09:27:53.381190 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" event={"ID":"98006db9-8ac9-4fc6-b552-8fc014985454","Type":"ContainerStarted","Data":"6ba753077ad45f25fa52743cca5f02aa434c4b1d23c1b7f71c76868b8ca3bebb"} Jan 29 09:27:53 crc kubenswrapper[4771]: I0129 09:27:53.381383 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:27:53 crc kubenswrapper[4771]: I0129 09:27:53.397162 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" podStartSLOduration=2.397137524 podStartE2EDuration="2.397137524s" podCreationTimestamp="2026-01-29 09:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:27:53.396775934 +0000 UTC m=+1293.519616171" watchObservedRunningTime="2026-01-29 09:27:53.397137524 +0000 UTC m=+1293.519977751" Jan 29 09:27:53 crc kubenswrapper[4771]: I0129 09:27:53.440367 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" podStartSLOduration=3.440340894 podStartE2EDuration="3.440340894s" podCreationTimestamp="2026-01-29 09:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:27:53.424944694 +0000 UTC m=+1293.547784931" watchObservedRunningTime="2026-01-29 09:27:53.440340894 +0000 UTC m=+1293.563181121" Jan 29 09:27:54 crc kubenswrapper[4771]: I0129 09:27:54.230529 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:27:54 crc kubenswrapper[4771]: I0129 09:27:54.251535 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:27:55 crc kubenswrapper[4771]: I0129 09:27:55.402133 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"be7dbf88-c4fe-4169-8794-d2f6880bdf07","Type":"ContainerStarted","Data":"a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc"} Jan 29 09:27:55 crc kubenswrapper[4771]: I0129 09:27:55.403575 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="be7dbf88-c4fe-4169-8794-d2f6880bdf07" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc" gracePeriod=30 Jan 29 09:27:55 crc kubenswrapper[4771]: I0129 09:27:55.412933 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cf48cb9-50ec-4565-a949-7175c133f3e7","Type":"ContainerStarted","Data":"a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f"} Jan 29 09:27:55 crc kubenswrapper[4771]: I0129 09:27:55.430958 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.247624052 podStartE2EDuration="5.430937116s" podCreationTimestamp="2026-01-29 09:27:50 +0000 UTC" firstStartedPulling="2026-01-29 09:27:51.666673929 +0000 UTC m=+1291.789514156" lastFinishedPulling="2026-01-29 09:27:54.849986993 +0000 UTC m=+1294.972827220" observedRunningTime="2026-01-29 09:27:55.421799727 +0000 UTC m=+1295.544639964" watchObservedRunningTime="2026-01-29 09:27:55.430937116 +0000 UTC m=+1295.553777343" Jan 29 09:27:55 crc kubenswrapper[4771]: I0129 09:27:55.861687 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.432658 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"612bc79f-9cad-4a4c-b575-1b3d5a6a6099","Type":"ContainerStarted","Data":"a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67"} Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.435065 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cf48cb9-50ec-4565-a949-7175c133f3e7","Type":"ContainerStarted","Data":"5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27"} Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.440972 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46","Type":"ContainerStarted","Data":"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9"} Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.441249 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46","Type":"ContainerStarted","Data":"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314"} Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.441532 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-log" containerID="cri-o://b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9" gracePeriod=30 Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.442133 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-metadata" containerID="cri-o://3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314" gracePeriod=30 Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.466687 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.006394267 podStartE2EDuration="6.466659977s" podCreationTimestamp="2026-01-29 09:27:50 +0000 UTC" firstStartedPulling="2026-01-29 09:27:51.399020307 +0000 UTC m=+1291.521860534" lastFinishedPulling="2026-01-29 09:27:54.859286017 +0000 UTC m=+1294.982126244" observedRunningTime="2026-01-29 09:27:56.45945261 +0000 UTC m=+1296.582292837" watchObservedRunningTime="2026-01-29 09:27:56.466659977 +0000 UTC m=+1296.589500224" Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.495675 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.584143124 podStartE2EDuration="6.49565813s" podCreationTimestamp="2026-01-29 09:27:50 +0000 UTC" firstStartedPulling="2026-01-29 09:27:51.944384018 +0000 UTC m=+1292.067224245" lastFinishedPulling="2026-01-29 09:27:54.855899024 +0000 UTC m=+1294.978739251" observedRunningTime="2026-01-29 09:27:56.48030003 +0000 UTC m=+1296.603140267" watchObservedRunningTime="2026-01-29 09:27:56.49565813 +0000 UTC m=+1296.618498357" Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.509716 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.614817811 podStartE2EDuration="6.509679583s" podCreationTimestamp="2026-01-29 09:27:50 +0000 UTC" firstStartedPulling="2026-01-29 09:27:51.957680201 +0000 UTC m=+1292.080520428" 
lastFinishedPulling="2026-01-29 09:27:54.852541963 +0000 UTC m=+1294.975382200" observedRunningTime="2026-01-29 09:27:56.500602465 +0000 UTC m=+1296.623442692" watchObservedRunningTime="2026-01-29 09:27:56.509679583 +0000 UTC m=+1296.632519810" Jan 29 09:27:56 crc kubenswrapper[4771]: I0129 09:27:56.655377 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.095424 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.208263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzw7j\" (UniqueName: \"kubernetes.io/projected/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-kube-api-access-vzw7j\") pod \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.208434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-config-data\") pod \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.208525 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-logs\") pod \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.208598 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-combined-ca-bundle\") pod \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\" (UID: \"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46\") " Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.211827 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-logs" (OuterVolumeSpecName: "logs") pod "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" (UID: "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.242852 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-kube-api-access-vzw7j" (OuterVolumeSpecName: "kube-api-access-vzw7j") pod "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" (UID: "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46"). InnerVolumeSpecName "kube-api-access-vzw7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.249990 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-config-data" (OuterVolumeSpecName: "config-data") pod "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" (UID: "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.250823 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" (UID: "20561ef4-5ba0-4698-8cb2-5d1c6f99ca46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.311757 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.311807 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzw7j\" (UniqueName: \"kubernetes.io/projected/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-kube-api-access-vzw7j\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.311825 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.311837 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.470345 4771 generic.go:334] "Generic (PLEG): container finished" podID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerID="3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314" exitCode=0 Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.470396 4771 generic.go:334] "Generic (PLEG): container finished" podID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerID="b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9" exitCode=143 Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.470419 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.470422 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46","Type":"ContainerDied","Data":"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314"} Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.470612 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46","Type":"ContainerDied","Data":"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9"} Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.470624 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20561ef4-5ba0-4698-8cb2-5d1c6f99ca46","Type":"ContainerDied","Data":"f3c229f5b4763582895c2c6dd037715c7d0da00738574fea77ac8daedc0665d5"} Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.470641 4771 scope.go:117] "RemoveContainer" containerID="3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.502658 4771 scope.go:117] "RemoveContainer" containerID="b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.520828 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.539112 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.544099 4771 scope.go:117] "RemoveContainer" containerID="3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314" Jan 29 09:27:57 crc kubenswrapper[4771]: E0129 09:27:57.544676 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314\": container with ID starting with 3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314 not found: ID does not exist" containerID="3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.544739 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314"} err="failed to get container status \"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314\": rpc error: code = NotFound desc = could not find container \"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314\": container with ID starting with 3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314 not found: ID does not exist" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.544765 4771 scope.go:117] "RemoveContainer" containerID="b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9" Jan 29 09:27:57 crc kubenswrapper[4771]: E0129 09:27:57.548856 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9\": container with ID starting with b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9 not found: ID does not exist" containerID="b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 
09:27:57.548904 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9"} err="failed to get container status \"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9\": rpc error: code = NotFound desc = could not find container \"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9\": container with ID starting with b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9 not found: ID does not exist"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.548933 4771 scope.go:117] "RemoveContainer" containerID="3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.549763 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314"} err="failed to get container status \"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314\": rpc error: code = NotFound desc = could not find container \"3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314\": container with ID starting with 3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314 not found: ID does not exist"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.549806 4771 scope.go:117] "RemoveContainer" containerID="b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.550234 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9"} err="failed to get container status \"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9\": rpc error: code = NotFound desc = could not find container \"b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9\": container with ID starting with b6ef54757a036bace589d0c43eccf8998f9bef0121148c49db88883319fe9cc9 not found: ID does not exist"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.556756 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 09:27:57 crc kubenswrapper[4771]: E0129 09:27:57.557279 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-metadata"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.557291 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-metadata"
Jan 29 09:27:57 crc kubenswrapper[4771]: E0129 09:27:57.557306 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-log"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.557312 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-log"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.557507 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-metadata"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.557516 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" containerName="nova-metadata-log"
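The paired "RemoveContainer" / "DeleteContainer returned error" entries above are benign: the containers were already removed, so when the kubelet re-queries the runtime it receives gRPC NotFound and logs it instead of failing. Below is a sketch of the same NotFound-tolerant status check against a CRI runtime socket, using only the public CRI API; the socket path, timeout, and hard-coded container ID are assumptions for illustration, and this is not the kubelet's own code:

```go
// Illustrative only: ask the CRI runtime (CRI-O here, by assumption) for a
// container's status and treat gRPC NotFound as "already removed", the same
// benign condition behind the DeleteContainer entries above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path is an assumption; adjust for the runtime in use.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Container ID copied from the log records above.
	id := "3dddcd088923c6220d08ef0955350cad66689ae04c64d6c944c55c0ea327c314"
	resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	switch {
	case status.Code(err) == codes.NotFound:
		fmt.Println("container already gone; nothing left to delete") // the log's NotFound path
	case err != nil:
		panic(err)
	default:
		fmt.Printf("state=%v\n", resp.Status.State)
	}
}
```

Treating NotFound as success makes the delete idempotent, which is why the kubelet can safely retry removal after a PLEG ContainerDied event.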
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.558571 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.563611 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.563945 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.567085 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.622940 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-logs\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.623260 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.623533 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.623596 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-config-data\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.623739 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrpq\" (UniqueName: \"kubernetes.io/projected/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-kube-api-access-dwrpq\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.726406 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrpq\" (UniqueName: \"kubernetes.io/projected/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-kube-api-access-dwrpq\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.726524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-logs\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0"
Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.726563 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " 
pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.726736 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.726769 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-config-data\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.726951 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-logs\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.732948 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.733355 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-config-data\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.733437 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.746573 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrpq\" (UniqueName: \"kubernetes.io/projected/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-kube-api-access-dwrpq\") pod \"nova-metadata-0\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " pod="openstack/nova-metadata-0" Jan 29 09:27:57 crc kubenswrapper[4771]: I0129 09:27:57.894007 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:27:58 crc kubenswrapper[4771]: I0129 09:27:58.368437 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:27:58 crc kubenswrapper[4771]: I0129 09:27:58.510145 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f","Type":"ContainerStarted","Data":"a63dfb32cf61c649239044baf0361648876c842485f3678d344d76dbe21dfd42"} Jan 29 09:27:58 crc kubenswrapper[4771]: I0129 09:27:58.848507 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20561ef4-5ba0-4698-8cb2-5d1c6f99ca46" path="/var/lib/kubelet/pods/20561ef4-5ba0-4698-8cb2-5d1c6f99ca46/volumes" Jan 29 09:27:59 crc kubenswrapper[4771]: I0129 09:27:59.525931 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f","Type":"ContainerStarted","Data":"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978"} Jan 29 09:27:59 crc kubenswrapper[4771]: I0129 09:27:59.525997 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f","Type":"ContainerStarted","Data":"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd"} Jan 29 09:27:59 crc kubenswrapper[4771]: I0129 09:27:59.546074 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.54605287 podStartE2EDuration="2.54605287s" podCreationTimestamp="2026-01-29 09:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:27:59.545014932 +0000 UTC m=+1299.667855179" watchObservedRunningTime="2026-01-29 09:27:59.54605287 +0000 UTC m=+1299.668893097" Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.520549 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.521108 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3004cf2e-c4f0-45ba-a5f5-ade209c47247" containerName="kube-state-metrics" containerID="cri-o://927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529" gracePeriod=30 Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.539168 4771 generic.go:334] "Generic (PLEG): container finished" podID="9ee2c782-d165-4a0b-bd83-9f506dd349b1" containerID="cad377085e919a01bef4cc3703f76e157fa912ada11dbde61abdaa398f83fdfc" exitCode=0 Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.540185 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4dv79" event={"ID":"9ee2c782-d165-4a0b-bd83-9f506dd349b1","Type":"ContainerDied","Data":"cad377085e919a01bef4cc3703f76e157fa912ada11dbde61abdaa398f83fdfc"} Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.549824 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.549944 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.583404 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 
09:28:00.976949 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn"
Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.992418 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 09:28:00 crc kubenswrapper[4771]: I0129 09:28:00.992484 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.034325 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.063048 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756b74b74c-jfsqx"]
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.063397 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" podUID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerName="dnsmasq-dns" containerID="cri-o://835b2559987e6b803ee9cd560a77532092d946e3b39333ae2360b378efe0c87d" gracePeriod=10
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.108813 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9rvh\" (UniqueName: \"kubernetes.io/projected/3004cf2e-c4f0-45ba-a5f5-ade209c47247-kube-api-access-j9rvh\") pod \"3004cf2e-c4f0-45ba-a5f5-ade209c47247\" (UID: \"3004cf2e-c4f0-45ba-a5f5-ade209c47247\") "
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.118964 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3004cf2e-c4f0-45ba-a5f5-ade209c47247-kube-api-access-j9rvh" (OuterVolumeSpecName: "kube-api-access-j9rvh") pod "3004cf2e-c4f0-45ba-a5f5-ade209c47247" (UID: "3004cf2e-c4f0-45ba-a5f5-ade209c47247"). InnerVolumeSpecName "kube-api-access-j9rvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.215058 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9rvh\" (UniqueName: \"kubernetes.io/projected/3004cf2e-c4f0-45ba-a5f5-ade209c47247-kube-api-access-j9rvh\") on node \"crc\" DevicePath \"\""
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.579891 4771 generic.go:334] "Generic (PLEG): container finished" podID="3004cf2e-c4f0-45ba-a5f5-ade209c47247" containerID="927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529" exitCode=2
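At 09:28:01.063048 a DELETE for dnsmasq-dns-756b74b74c-jfsqx arrives from the API server, and the kubelet immediately starts killing the dnsmasq-dns container with gracePeriod=10; that grace period comes from the delete request or the pod spec's terminationGracePeriodSeconds (these logs do not show which client issued the DELETE). A minimal client-go sketch of issuing such a delete with an explicit grace period, assuming kubeconfig access; illustrative only, not how these pods were actually removed:

```go
// Illustrative only: delete a pod with an explicit 10-second grace period,
// which would produce the same SyncLoop DELETE followed by
// "Killing container with a grace period" gracePeriod=10 seen above.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	grace := int64(10) // matches gracePeriod=10 in the kubelet record above
	err = cs.CoreV1().Pods("openstack").Delete(context.Background(),
		"dnsmasq-dns-756b74b74c-jfsqx",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
}
```

If the grace period expires before the container exits, the runtime escalates to SIGKILL; here the container exited cleanly (exitCode=0 in the PLEG record that follows).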
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.580135 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.581207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3004cf2e-c4f0-45ba-a5f5-ade209c47247","Type":"ContainerDied","Data":"927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529"}
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.581310 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3004cf2e-c4f0-45ba-a5f5-ade209c47247","Type":"ContainerDied","Data":"a527de3e03505bd01003fe3fd25a7ab029f98c952c20e15900ce335dcdd16eea"}
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.581367 4771 scope.go:117] "RemoveContainer" containerID="927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529"
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.593802 4771 generic.go:334] "Generic (PLEG): container finished" podID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerID="835b2559987e6b803ee9cd560a77532092d946e3b39333ae2360b378efe0c87d" exitCode=0
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.594058 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" event={"ID":"4a60492e-2744-495e-ac7b-d6ff1d970385","Type":"ContainerDied","Data":"835b2559987e6b803ee9cd560a77532092d946e3b39333ae2360b378efe0c87d"}
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.595563 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx"
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.629136 4771 scope.go:117] "RemoveContainer" containerID="927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529"
Jan 29 09:28:01 crc kubenswrapper[4771]: E0129 09:28:01.630638 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529\": container with ID starting with 927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529 not found: ID does not exist" containerID="927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529"
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.630788 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529"} err="failed to get container status \"927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529\": rpc error: code = NotFound desc = could not find container \"927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529\": container with ID starting with 927dd9f5b2488e669188e7950134486fda2508984da0958c3c016b88639ce529 not found: ID does not exist"
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.635674 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-sb\") pod \"4a60492e-2744-495e-ac7b-d6ff1d970385\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") "
Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.636103 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-svc\") pod \"4a60492e-2744-495e-ac7b-d6ff1d970385\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " Jan 29 09:28:01 
crc kubenswrapper[4771]: I0129 09:28:01.636236 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4shsv\" (UniqueName: \"kubernetes.io/projected/4a60492e-2744-495e-ac7b-d6ff1d970385-kube-api-access-4shsv\") pod \"4a60492e-2744-495e-ac7b-d6ff1d970385\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.636457 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-swift-storage-0\") pod \"4a60492e-2744-495e-ac7b-d6ff1d970385\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.636530 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-config\") pod \"4a60492e-2744-495e-ac7b-d6ff1d970385\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.636660 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-nb\") pod \"4a60492e-2744-495e-ac7b-d6ff1d970385\" (UID: \"4a60492e-2744-495e-ac7b-d6ff1d970385\") " Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.652594 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a60492e-2744-495e-ac7b-d6ff1d970385-kube-api-access-4shsv" (OuterVolumeSpecName: "kube-api-access-4shsv") pod "4a60492e-2744-495e-ac7b-d6ff1d970385" (UID: "4a60492e-2744-495e-ac7b-d6ff1d970385"). InnerVolumeSpecName "kube-api-access-4shsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.715969 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a60492e-2744-495e-ac7b-d6ff1d970385" (UID: "4a60492e-2744-495e-ac7b-d6ff1d970385"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.735005 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a60492e-2744-495e-ac7b-d6ff1d970385" (UID: "4a60492e-2744-495e-ac7b-d6ff1d970385"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.749482 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.749538 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4shsv\" (UniqueName: \"kubernetes.io/projected/4a60492e-2744-495e-ac7b-d6ff1d970385-kube-api-access-4shsv\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.749551 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.755798 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.764656 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a60492e-2744-495e-ac7b-d6ff1d970385" (UID: "4a60492e-2744-495e-ac7b-d6ff1d970385"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.766142 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-config" (OuterVolumeSpecName: "config") pod "4a60492e-2744-495e-ac7b-d6ff1d970385" (UID: "4a60492e-2744-495e-ac7b-d6ff1d970385"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.845501 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a60492e-2744-495e-ac7b-d6ff1d970385" (UID: "4a60492e-2744-495e-ac7b-d6ff1d970385"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.852203 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.852238 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.852347 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a60492e-2744-495e-ac7b-d6ff1d970385-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.988016 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:28:01 crc kubenswrapper[4771]: I0129 09:28:01.996899 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.005519 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.035840 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:28:02 crc kubenswrapper[4771]: E0129 09:28:02.036348 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee2c782-d165-4a0b-bd83-9f506dd349b1" containerName="nova-manage" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.036368 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee2c782-d165-4a0b-bd83-9f506dd349b1" containerName="nova-manage" Jan 29 09:28:02 crc kubenswrapper[4771]: E0129 09:28:02.036388 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerName="init" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.036395 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerName="init" Jan 29 09:28:02 crc kubenswrapper[4771]: E0129 09:28:02.036409 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3004cf2e-c4f0-45ba-a5f5-ade209c47247" containerName="kube-state-metrics" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.036417 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3004cf2e-c4f0-45ba-a5f5-ade209c47247" containerName="kube-state-metrics" Jan 29 09:28:02 crc kubenswrapper[4771]: E0129 09:28:02.036450 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerName="dnsmasq-dns" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.036456 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerName="dnsmasq-dns" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.036645 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3004cf2e-c4f0-45ba-a5f5-ade209c47247" containerName="kube-state-metrics" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.036654 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a60492e-2744-495e-ac7b-d6ff1d970385" containerName="dnsmasq-dns" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.036662 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee2c782-d165-4a0b-bd83-9f506dd349b1" containerName="nova-manage" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.037356 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.037445 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.040244 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.040557 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.058361 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-combined-ca-bundle\") pod \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.058501 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-scripts\") pod \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.058646 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-config-data\") pod \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.058688 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdr7b\" (UniqueName: \"kubernetes.io/projected/9ee2c782-d165-4a0b-bd83-9f506dd349b1-kube-api-access-mdr7b\") pod \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\" (UID: \"9ee2c782-d165-4a0b-bd83-9f506dd349b1\") " Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.063206 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-scripts" (OuterVolumeSpecName: "scripts") pod "9ee2c782-d165-4a0b-bd83-9f506dd349b1" (UID: "9ee2c782-d165-4a0b-bd83-9f506dd349b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.076948 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee2c782-d165-4a0b-bd83-9f506dd349b1-kube-api-access-mdr7b" (OuterVolumeSpecName: "kube-api-access-mdr7b") pod "9ee2c782-d165-4a0b-bd83-9f506dd349b1" (UID: "9ee2c782-d165-4a0b-bd83-9f506dd349b1"). InnerVolumeSpecName "kube-api-access-mdr7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.081367 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.083765 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.095019 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-config-data" (OuterVolumeSpecName: "config-data") pod "9ee2c782-d165-4a0b-bd83-9f506dd349b1" (UID: "9ee2c782-d165-4a0b-bd83-9f506dd349b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.099960 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ee2c782-d165-4a0b-bd83-9f506dd349b1" (UID: "9ee2c782-d165-4a0b-bd83-9f506dd349b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161223 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161371 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161471 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsq7d\" (UniqueName: \"kubernetes.io/projected/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-api-access-qsq7d\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161539 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161645 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 
29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161659 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdr7b\" (UniqueName: \"kubernetes.io/projected/9ee2c782-d165-4a0b-bd83-9f506dd349b1-kube-api-access-mdr7b\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161671 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.161720 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ee2c782-d165-4a0b-bd83-9f506dd349b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.265799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.266152 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.266200 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsq7d\" (UniqueName: \"kubernetes.io/projected/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-api-access-qsq7d\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.266230 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.270816 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.272617 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.273377 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc 
kubenswrapper[4771]: I0129 09:28:02.286861 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsq7d\" (UniqueName: \"kubernetes.io/projected/cd7da89d-e16f-404a-b1dd-ccaaa3069431-kube-api-access-qsq7d\") pod \"kube-state-metrics-0\" (UID: \"cd7da89d-e16f-404a-b1dd-ccaaa3069431\") " pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.363968 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.619086 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4dv79" event={"ID":"9ee2c782-d165-4a0b-bd83-9f506dd349b1","Type":"ContainerDied","Data":"2ce077596638b6680156f2a49ac6fdd5cef855f9be6eaa74aa299f96cc5fbd82"} Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.619401 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce077596638b6680156f2a49ac6fdd5cef855f9be6eaa74aa299f96cc5fbd82" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.619490 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4dv79" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.622787 4771 generic.go:334] "Generic (PLEG): container finished" podID="6e458d2e-a019-469c-aee4-869073bfd47b" containerID="92df250051b5a4b3817ccfc9478c4f6d465f2e4078aff42e30781aa97c307fb5" exitCode=0 Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.622859 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" event={"ID":"6e458d2e-a019-469c-aee4-869073bfd47b","Type":"ContainerDied","Data":"92df250051b5a4b3817ccfc9478c4f6d465f2e4078aff42e30781aa97c307fb5"} Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.629607 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.629649 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756b74b74c-jfsqx" event={"ID":"4a60492e-2744-495e-ac7b-d6ff1d970385","Type":"ContainerDied","Data":"7f81c7f5ef331428a90b844872e19297fe7a6b355a49883aab0bdf0a5b763eb0"} Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.629689 4771 scope.go:117] "RemoveContainer" containerID="835b2559987e6b803ee9cd560a77532092d946e3b39333ae2360b378efe0c87d" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.763664 4771 scope.go:117] "RemoveContainer" containerID="93a5e12847183f0c52406a028207ef573966caceed7feedd08270e55dd988293" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.772950 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756b74b74c-jfsqx"] Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.784646 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-756b74b74c-jfsqx"] Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.807407 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.807688 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-log" containerID="cri-o://a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f" gracePeriod=30 Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.808974 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-api" containerID="cri-o://5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27" gracePeriod=30 Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.830362 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.876213 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3004cf2e-c4f0-45ba-a5f5-ade209c47247" path="/var/lib/kubelet/pods/3004cf2e-c4f0-45ba-a5f5-ade209c47247/volumes" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.877546 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a60492e-2744-495e-ac7b-d6ff1d970385" path="/var/lib/kubelet/pods/4a60492e-2744-495e-ac7b-d6ff1d970385/volumes" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.880151 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.880507 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-log" containerID="cri-o://99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd" gracePeriod=30 Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.881201 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-metadata" containerID="cri-o://3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978" gracePeriod=30 Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.896790 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.896890 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:28:02 crc kubenswrapper[4771]: I0129 09:28:02.914889 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:28:02 crc kubenswrapper[4771]: E0129 09:28:02.968875 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ee2c782_d165_4a0b_bd83_9f506dd349b1.slice/crio-2ce077596638b6680156f2a49ac6fdd5cef855f9be6eaa74aa299f96cc5fbd82\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf48cb9_50ec_4565_a949_7175c133f3e7.slice/crio-a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a60492e_2744_495e_ac7b_d6ff1d970385.slice/crio-7f81c7f5ef331428a90b844872e19297fe7a6b355a49883aab0bdf0a5b763eb0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17fb2e4_23e1_4c07_8e0f_e8ad998fcb3f.slice/crio-conmon-99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf48cb9_50ec_4565_a949_7175c133f3e7.slice/crio-conmon-a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.139126 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.139798 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-central-agent" containerID="cri-o://631b200a269ea89babc1e706e0a8d3068a8b9c2ef95412efae012a6b85b84353" gracePeriod=30 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.139960 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="proxy-httpd" containerID="cri-o://042914a4791bcaa9f38ee7a6859e5ab9eccc3e641aecab635856a2ddcbd5c0f0" gracePeriod=30 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.140016 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="sg-core" containerID="cri-o://aae3234fdd87e267f0d537208708be9ac3820446382bdd862bc55875a4b02637" gracePeriod=30 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.140049 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-notification-agent" containerID="cri-o://2dbf0478281227b10ea315e001b8c032dc8f2bb132d6fdff02a680491ee537ce" gracePeriod=30 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.501892 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.597146 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-config-data\") pod \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.597189 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-logs\") pod \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.597314 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-combined-ca-bundle\") pod \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.597426 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-nova-metadata-tls-certs\") pod \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.597460 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwrpq\" (UniqueName: \"kubernetes.io/projected/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-kube-api-access-dwrpq\") pod \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\" (UID: \"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f\") " Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.599347 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-logs" (OuterVolumeSpecName: "logs") pod "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" (UID: "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.603947 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-kube-api-access-dwrpq" (OuterVolumeSpecName: "kube-api-access-dwrpq") pod "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" (UID: "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f"). InnerVolumeSpecName "kube-api-access-dwrpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.638850 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-config-data" (OuterVolumeSpecName: "config-data") pod "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" (UID: "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.649914 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" (UID: "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.656968 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" (UID: "e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.658197 4771 generic.go:334] "Generic (PLEG): container finished" podID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerID="a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f" exitCode=143 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.658497 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cf48cb9-50ec-4565-a949-7175c133f3e7","Type":"ContainerDied","Data":"a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.662762 4771 generic.go:334] "Generic (PLEG): container finished" podID="9da5d24e-199a-41db-a247-048e7fae22a1" containerID="042914a4791bcaa9f38ee7a6859e5ab9eccc3e641aecab635856a2ddcbd5c0f0" exitCode=0 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.663026 4771 generic.go:334] "Generic (PLEG): container finished" podID="9da5d24e-199a-41db-a247-048e7fae22a1" containerID="aae3234fdd87e267f0d537208708be9ac3820446382bdd862bc55875a4b02637" exitCode=2 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.663113 4771 generic.go:334] "Generic (PLEG): container finished" podID="9da5d24e-199a-41db-a247-048e7fae22a1" containerID="631b200a269ea89babc1e706e0a8d3068a8b9c2ef95412efae012a6b85b84353" exitCode=0 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.662871 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerDied","Data":"042914a4791bcaa9f38ee7a6859e5ab9eccc3e641aecab635856a2ddcbd5c0f0"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.663362 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerDied","Data":"aae3234fdd87e267f0d537208708be9ac3820446382bdd862bc55875a4b02637"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.663474 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerDied","Data":"631b200a269ea89babc1e706e0a8d3068a8b9c2ef95412efae012a6b85b84353"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.673888 4771 generic.go:334] "Generic (PLEG): container finished" podID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerID="3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978" exitCode=0 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.673925 4771 generic.go:334] "Generic (PLEG): container finished" podID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerID="99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd" exitCode=143 Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.673955 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f","Type":"ContainerDied","Data":"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978"} Jan 29 09:28:03 
crc kubenswrapper[4771]: I0129 09:28:03.674009 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f","Type":"ContainerDied","Data":"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.674020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f","Type":"ContainerDied","Data":"a63dfb32cf61c649239044baf0361648876c842485f3678d344d76dbe21dfd42"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.674038 4771 scope.go:117] "RemoveContainer" containerID="3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.674409 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.690333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd7da89d-e16f-404a-b1dd-ccaaa3069431","Type":"ContainerStarted","Data":"83c63bb4fc9668d047d8888c8aa3c2dc1b468eae8d0aec888f2ba902f7c34352"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.691006 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd7da89d-e16f-404a-b1dd-ccaaa3069431","Type":"ContainerStarted","Data":"5f45ee3702ff00b1fbaef4e776e0f4e1ddf30668da829b946f37aba6e39c01ea"} Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.702604 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.702646 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.702657 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.702666 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.702675 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwrpq\" (UniqueName: \"kubernetes.io/projected/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f-kube-api-access-dwrpq\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.712541 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.238941987 podStartE2EDuration="2.712498267s" podCreationTimestamp="2026-01-29 09:28:01 +0000 UTC" firstStartedPulling="2026-01-29 09:28:02.909332331 +0000 UTC m=+1303.032172558" lastFinishedPulling="2026-01-29 09:28:03.382888611 +0000 UTC m=+1303.505728838" observedRunningTime="2026-01-29 09:28:03.706562855 +0000 UTC m=+1303.829403082" watchObservedRunningTime="2026-01-29 09:28:03.712498267 +0000 UTC m=+1303.835338484" Jan 29 09:28:03 crc 
kubenswrapper[4771]: I0129 09:28:03.754997 4771 scope.go:117] "RemoveContainer" containerID="99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.769614 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.791388 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.796209 4771 scope.go:117] "RemoveContainer" containerID="3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978" Jan 29 09:28:03 crc kubenswrapper[4771]: E0129 09:28:03.797114 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978\": container with ID starting with 3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978 not found: ID does not exist" containerID="3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.797153 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978"} err="failed to get container status \"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978\": rpc error: code = NotFound desc = could not find container \"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978\": container with ID starting with 3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978 not found: ID does not exist" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.797180 4771 scope.go:117] "RemoveContainer" containerID="99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd" Jan 29 09:28:03 crc kubenswrapper[4771]: E0129 09:28:03.797647 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd\": container with ID starting with 99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd not found: ID does not exist" containerID="99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.797682 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd"} err="failed to get container status \"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd\": rpc error: code = NotFound desc = could not find container \"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd\": container with ID starting with 99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd not found: ID does not exist" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.797725 4771 scope.go:117] "RemoveContainer" containerID="3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.798929 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978"} err="failed to get container status \"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978\": rpc error: code = NotFound desc = could not find container 
\"3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978\": container with ID starting with 3236693bf60d96a690d9ef1bd5b34478a1f67e4c3d2c9f7c7726579a428ca978 not found: ID does not exist" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.798970 4771 scope.go:117] "RemoveContainer" containerID="99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.801305 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd"} err="failed to get container status \"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd\": rpc error: code = NotFound desc = could not find container \"99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd\": container with ID starting with 99efaaf32f44aa879d9b4bc885648c05e7e86bcd88e8513672cdf29f139586bd not found: ID does not exist" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.819771 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:03 crc kubenswrapper[4771]: E0129 09:28:03.820389 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-log" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.820408 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-log" Jan 29 09:28:03 crc kubenswrapper[4771]: E0129 09:28:03.820439 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-metadata" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.820447 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-metadata" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.820780 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-metadata" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.820856 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" containerName="nova-metadata-log" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.822238 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.825234 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.825432 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.857002 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.907212 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-config-data\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.907263 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.907378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-logs\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.907444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:03 crc kubenswrapper[4771]: I0129 09:28:03.907479 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6nb\" (UniqueName: \"kubernetes.io/projected/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-kube-api-access-8h6nb\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.009155 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-logs\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.009254 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.009289 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6nb\" (UniqueName: \"kubernetes.io/projected/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-kube-api-access-8h6nb\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " 
pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.009372 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-config-data\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.009391 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.011204 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-logs\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.024350 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.024483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.025060 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-config-data\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.043575 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6nb\" (UniqueName: \"kubernetes.io/projected/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-kube-api-access-8h6nb\") pod \"nova-metadata-0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.127306 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.152341 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.213451 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-scripts\") pod \"6e458d2e-a019-469c-aee4-869073bfd47b\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.213594 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-combined-ca-bundle\") pod \"6e458d2e-a019-469c-aee4-869073bfd47b\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.213790 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-config-data\") pod \"6e458d2e-a019-469c-aee4-869073bfd47b\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.213847 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz95t\" (UniqueName: \"kubernetes.io/projected/6e458d2e-a019-469c-aee4-869073bfd47b-kube-api-access-wz95t\") pod \"6e458d2e-a019-469c-aee4-869073bfd47b\" (UID: \"6e458d2e-a019-469c-aee4-869073bfd47b\") " Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.217896 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-scripts" (OuterVolumeSpecName: "scripts") pod "6e458d2e-a019-469c-aee4-869073bfd47b" (UID: "6e458d2e-a019-469c-aee4-869073bfd47b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.218127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e458d2e-a019-469c-aee4-869073bfd47b-kube-api-access-wz95t" (OuterVolumeSpecName: "kube-api-access-wz95t") pod "6e458d2e-a019-469c-aee4-869073bfd47b" (UID: "6e458d2e-a019-469c-aee4-869073bfd47b"). InnerVolumeSpecName "kube-api-access-wz95t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.240954 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e458d2e-a019-469c-aee4-869073bfd47b" (UID: "6e458d2e-a019-469c-aee4-869073bfd47b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.249274 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-config-data" (OuterVolumeSpecName: "config-data") pod "6e458d2e-a019-469c-aee4-869073bfd47b" (UID: "6e458d2e-a019-469c-aee4-869073bfd47b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.316155 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.316208 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.316220 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e458d2e-a019-469c-aee4-869073bfd47b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.316229 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz95t\" (UniqueName: \"kubernetes.io/projected/6e458d2e-a019-469c-aee4-869073bfd47b-kube-api-access-wz95t\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:04 crc kubenswrapper[4771]: W0129 09:28:04.634006 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85eaeb5_6b2f_4381_bb2d_26877f2c8ff0.slice/crio-5f9a1db6d904637198a87ec95446f02c3518b6968016d6b6cd3225f97c383664 WatchSource:0}: Error finding container 5f9a1db6d904637198a87ec95446f02c3518b6968016d6b6cd3225f97c383664: Status 404 returned error can't find the container with id 5f9a1db6d904637198a87ec95446f02c3518b6968016d6b6cd3225f97c383664 Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.639927 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.710892 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" event={"ID":"6e458d2e-a019-469c-aee4-869073bfd47b","Type":"ContainerDied","Data":"9ad11d87ee266de1d421da8368e94fe442729754e43be766c49161861f932047"} Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.710941 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad11d87ee266de1d421da8368e94fe442729754e43be766c49161861f932047" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.711015 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kj4q4" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.721783 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="612bc79f-9cad-4a4c-b575-1b3d5a6a6099" containerName="nova-scheduler-scheduler" containerID="cri-o://a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" gracePeriod=30 Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.722175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0","Type":"ContainerStarted","Data":"5f9a1db6d904637198a87ec95446f02c3518b6968016d6b6cd3225f97c383664"} Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.723137 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.740203 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:28:04 crc kubenswrapper[4771]: E0129 09:28:04.740877 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e458d2e-a019-469c-aee4-869073bfd47b" containerName="nova-cell1-conductor-db-sync" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.740946 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e458d2e-a019-469c-aee4-869073bfd47b" containerName="nova-cell1-conductor-db-sync" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.741247 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e458d2e-a019-469c-aee4-869073bfd47b" containerName="nova-cell1-conductor-db-sync" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.741998 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.748117 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.755307 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.829570 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03fe6bc4-290b-46aa-b934-4d6849586b41-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.829991 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03fe6bc4-290b-46aa-b934-4d6849586b41-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.830333 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbplf\" (UniqueName: \"kubernetes.io/projected/03fe6bc4-290b-46aa-b934-4d6849586b41-kube-api-access-pbplf\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.850806 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f" path="/var/lib/kubelet/pods/e17fb2e4-23e1-4c07-8e0f-e8ad998fcb3f/volumes" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.932753 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbplf\" (UniqueName: \"kubernetes.io/projected/03fe6bc4-290b-46aa-b934-4d6849586b41-kube-api-access-pbplf\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.933309 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03fe6bc4-290b-46aa-b934-4d6849586b41-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.933507 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03fe6bc4-290b-46aa-b934-4d6849586b41-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.937853 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03fe6bc4-290b-46aa-b934-4d6849586b41-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.943067 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/03fe6bc4-290b-46aa-b934-4d6849586b41-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:04 crc kubenswrapper[4771]: I0129 09:28:04.950032 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbplf\" (UniqueName: \"kubernetes.io/projected/03fe6bc4-290b-46aa-b934-4d6849586b41-kube-api-access-pbplf\") pod \"nova-cell1-conductor-0\" (UID: \"03fe6bc4-290b-46aa-b934-4d6849586b41\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:05 crc kubenswrapper[4771]: I0129 09:28:05.116771 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:05 crc kubenswrapper[4771]: E0129 09:28:05.551939 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:28:05 crc kubenswrapper[4771]: E0129 09:28:05.554165 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:28:05 crc kubenswrapper[4771]: E0129 09:28:05.555835 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:28:05 crc kubenswrapper[4771]: E0129 09:28:05.555869 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="612bc79f-9cad-4a4c-b575-1b3d5a6a6099" containerName="nova-scheduler-scheduler" Jan 29 09:28:05 crc kubenswrapper[4771]: I0129 09:28:05.627198 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:28:05 crc kubenswrapper[4771]: W0129 09:28:05.643150 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fe6bc4_290b_46aa_b934_4d6849586b41.slice/crio-d6b2e7d5331b95097ec78f90427193b960e91b5100d94492fbd53ac2a0acca59 WatchSource:0}: Error finding container d6b2e7d5331b95097ec78f90427193b960e91b5100d94492fbd53ac2a0acca59: Status 404 returned error can't find the container with id d6b2e7d5331b95097ec78f90427193b960e91b5100d94492fbd53ac2a0acca59 Jan 29 09:28:05 crc kubenswrapper[4771]: I0129 09:28:05.737723 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"03fe6bc4-290b-46aa-b934-4d6849586b41","Type":"ContainerStarted","Data":"d6b2e7d5331b95097ec78f90427193b960e91b5100d94492fbd53ac2a0acca59"} Jan 29 09:28:05 crc kubenswrapper[4771]: I0129 09:28:05.741499 4771 generic.go:334] "Generic (PLEG): container finished" podID="9da5d24e-199a-41db-a247-048e7fae22a1" 
containerID="2dbf0478281227b10ea315e001b8c032dc8f2bb132d6fdff02a680491ee537ce" exitCode=0 Jan 29 09:28:05 crc kubenswrapper[4771]: I0129 09:28:05.741564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerDied","Data":"2dbf0478281227b10ea315e001b8c032dc8f2bb132d6fdff02a680491ee537ce"} Jan 29 09:28:05 crc kubenswrapper[4771]: I0129 09:28:05.745287 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0","Type":"ContainerStarted","Data":"83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34"} Jan 29 09:28:05 crc kubenswrapper[4771]: I0129 09:28:05.745331 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0","Type":"ContainerStarted","Data":"1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b"} Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.108266 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.136856 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.13680366 podStartE2EDuration="3.13680366s" podCreationTimestamp="2026-01-29 09:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:05.777886033 +0000 UTC m=+1305.900726260" watchObservedRunningTime="2026-01-29 09:28:06.13680366 +0000 UTC m=+1306.259643897" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.164166 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6zl6\" (UniqueName: \"kubernetes.io/projected/9da5d24e-199a-41db-a247-048e7fae22a1-kube-api-access-v6zl6\") pod \"9da5d24e-199a-41db-a247-048e7fae22a1\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.164242 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-sg-core-conf-yaml\") pod \"9da5d24e-199a-41db-a247-048e7fae22a1\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.164277 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-log-httpd\") pod \"9da5d24e-199a-41db-a247-048e7fae22a1\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.164303 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-config-data\") pod \"9da5d24e-199a-41db-a247-048e7fae22a1\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.164542 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-run-httpd\") pod \"9da5d24e-199a-41db-a247-048e7fae22a1\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.164601 4771 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-scripts\") pod \"9da5d24e-199a-41db-a247-048e7fae22a1\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.164645 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-combined-ca-bundle\") pod \"9da5d24e-199a-41db-a247-048e7fae22a1\" (UID: \"9da5d24e-199a-41db-a247-048e7fae22a1\") " Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.165015 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9da5d24e-199a-41db-a247-048e7fae22a1" (UID: "9da5d24e-199a-41db-a247-048e7fae22a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.165288 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9da5d24e-199a-41db-a247-048e7fae22a1" (UID: "9da5d24e-199a-41db-a247-048e7fae22a1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.165764 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.165779 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9da5d24e-199a-41db-a247-048e7fae22a1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.173096 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-scripts" (OuterVolumeSpecName: "scripts") pod "9da5d24e-199a-41db-a247-048e7fae22a1" (UID: "9da5d24e-199a-41db-a247-048e7fae22a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.173927 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da5d24e-199a-41db-a247-048e7fae22a1-kube-api-access-v6zl6" (OuterVolumeSpecName: "kube-api-access-v6zl6") pod "9da5d24e-199a-41db-a247-048e7fae22a1" (UID: "9da5d24e-199a-41db-a247-048e7fae22a1"). InnerVolumeSpecName "kube-api-access-v6zl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.206082 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9da5d24e-199a-41db-a247-048e7fae22a1" (UID: "9da5d24e-199a-41db-a247-048e7fae22a1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.260002 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9da5d24e-199a-41db-a247-048e7fae22a1" (UID: "9da5d24e-199a-41db-a247-048e7fae22a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.268169 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.268209 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.268223 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6zl6\" (UniqueName: \"kubernetes.io/projected/9da5d24e-199a-41db-a247-048e7fae22a1-kube-api-access-v6zl6\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.268234 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.297337 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-config-data" (OuterVolumeSpecName: "config-data") pod "9da5d24e-199a-41db-a247-048e7fae22a1" (UID: "9da5d24e-199a-41db-a247-048e7fae22a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.370429 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da5d24e-199a-41db-a247-048e7fae22a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.754688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"03fe6bc4-290b-46aa-b934-4d6849586b41","Type":"ContainerStarted","Data":"11bd28eca6cf9855406246f55f8798d6e5894312c7ddffa300d5887da9894231"} Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.754850 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.759300 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9da5d24e-199a-41db-a247-048e7fae22a1","Type":"ContainerDied","Data":"7f5562a9150d9a96237103c51bca4734f26a68e6efbfa142319c84604de2c800"} Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.759357 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.759374 4771 scope.go:117] "RemoveContainer" containerID="042914a4791bcaa9f38ee7a6859e5ab9eccc3e641aecab635856a2ddcbd5c0f0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.784003 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.783979534 podStartE2EDuration="2.783979534s" podCreationTimestamp="2026-01-29 09:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:06.774767122 +0000 UTC m=+1306.897607369" watchObservedRunningTime="2026-01-29 09:28:06.783979534 +0000 UTC m=+1306.906819771" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.801687 4771 scope.go:117] "RemoveContainer" containerID="aae3234fdd87e267f0d537208708be9ac3820446382bdd862bc55875a4b02637" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.816416 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.833765 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.852011 4771 scope.go:117] "RemoveContainer" containerID="2dbf0478281227b10ea315e001b8c032dc8f2bb132d6fdff02a680491ee537ce" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.859503 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" path="/var/lib/kubelet/pods/9da5d24e-199a-41db-a247-048e7fae22a1/volumes" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.860247 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:06 crc kubenswrapper[4771]: E0129 09:28:06.863023 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-notification-agent" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863051 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-notification-agent" Jan 29 09:28:06 crc kubenswrapper[4771]: E0129 09:28:06.863083 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="sg-core" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863091 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="sg-core" Jan 29 09:28:06 crc kubenswrapper[4771]: E0129 09:28:06.863108 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="proxy-httpd" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863113 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="proxy-httpd" Jan 29 09:28:06 crc kubenswrapper[4771]: E0129 09:28:06.863127 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-central-agent" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863133 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-central-agent" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863348 4771 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="sg-core" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863386 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-central-agent" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863398 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="ceilometer-notification-agent" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.863407 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da5d24e-199a-41db-a247-048e7fae22a1" containerName="proxy-httpd" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.865449 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.865553 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.883663 4771 scope.go:117] "RemoveContainer" containerID="631b200a269ea89babc1e706e0a8d3068a8b9c2ef95412efae012a6b85b84353" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.884281 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.884474 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.884616 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.989669 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-log-httpd\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.989834 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.989862 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.989903 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-config-data\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.989971 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-scripts\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " 
pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.990015 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.990126 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg87l\" (UniqueName: \"kubernetes.io/projected/b04c3ea3-b31d-48c1-a088-17356009774e-kube-api-access-hg87l\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:06 crc kubenswrapper[4771]: I0129 09:28:06.990146 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-run-httpd\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092501 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-log-httpd\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092588 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092610 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092648 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-config-data\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092720 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-scripts\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092752 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092816 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg87l\" (UniqueName: 
\"kubernetes.io/projected/b04c3ea3-b31d-48c1-a088-17356009774e-kube-api-access-hg87l\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.092839 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-run-httpd\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.093341 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-run-httpd\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.093621 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-log-httpd\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.098363 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.099154 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.100984 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-scripts\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.108374 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-config-data\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.113301 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.114029 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg87l\" (UniqueName: \"kubernetes.io/projected/b04c3ea3-b31d-48c1-a088-17356009774e-kube-api-access-hg87l\") pod \"ceilometer-0\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") " pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.209195 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.679946 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:07 crc kubenswrapper[4771]: W0129 09:28:07.682034 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb04c3ea3_b31d_48c1_a088_17356009774e.slice/crio-b376361fe041f3703a5a01fe88b5f88298705a643b809f5f9dfc0db5d5c67953 WatchSource:0}: Error finding container b376361fe041f3703a5a01fe88b5f88298705a643b809f5f9dfc0db5d5c67953: Status 404 returned error can't find the container with id b376361fe041f3703a5a01fe88b5f88298705a643b809f5f9dfc0db5d5c67953 Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.772247 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.775749 4771 generic.go:334] "Generic (PLEG): container finished" podID="612bc79f-9cad-4a4c-b575-1b3d5a6a6099" containerID="a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" exitCode=0 Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.775817 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.775846 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"612bc79f-9cad-4a4c-b575-1b3d5a6a6099","Type":"ContainerDied","Data":"a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67"} Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.775902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"612bc79f-9cad-4a4c-b575-1b3d5a6a6099","Type":"ContainerDied","Data":"3d6c51fcf71ce376ba548e8af67e7097a8f77ef17039dd2c624214e88b45ce4c"} Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.775923 4771 scope.go:117] "RemoveContainer" containerID="a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.780249 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerStarted","Data":"b376361fe041f3703a5a01fe88b5f88298705a643b809f5f9dfc0db5d5c67953"} Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.812094 4771 scope.go:117] "RemoveContainer" containerID="a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" Jan 29 09:28:07 crc kubenswrapper[4771]: E0129 09:28:07.813257 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67\": container with ID starting with a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67 not found: ID does not exist" containerID="a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.813316 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67"} err="failed to get container status \"a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67\": rpc error: code = NotFound desc = could not find container \"a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67\": container 
with ID starting with a558eff6dfea3eaa0518db4c95d4579ef78d0005cd2527fbdc7231da85b0fb67 not found: ID does not exist" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.823321 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-combined-ca-bundle\") pod \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.823384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgs47\" (UniqueName: \"kubernetes.io/projected/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-kube-api-access-fgs47\") pod \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.823448 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-config-data\") pod \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\" (UID: \"612bc79f-9cad-4a4c-b575-1b3d5a6a6099\") " Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.831440 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-kube-api-access-fgs47" (OuterVolumeSpecName: "kube-api-access-fgs47") pod "612bc79f-9cad-4a4c-b575-1b3d5a6a6099" (UID: "612bc79f-9cad-4a4c-b575-1b3d5a6a6099"). InnerVolumeSpecName "kube-api-access-fgs47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.856906 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-config-data" (OuterVolumeSpecName: "config-data") pod "612bc79f-9cad-4a4c-b575-1b3d5a6a6099" (UID: "612bc79f-9cad-4a4c-b575-1b3d5a6a6099"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.867878 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "612bc79f-9cad-4a4c-b575-1b3d5a6a6099" (UID: "612bc79f-9cad-4a4c-b575-1b3d5a6a6099"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.925406 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.925440 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:07 crc kubenswrapper[4771]: I0129 09:28:07.925455 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgs47\" (UniqueName: \"kubernetes.io/projected/612bc79f-9cad-4a4c-b575-1b3d5a6a6099-kube-api-access-fgs47\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.126594 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.137510 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.165282 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:08 crc kubenswrapper[4771]: E0129 09:28:08.165943 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612bc79f-9cad-4a4c-b575-1b3d5a6a6099" containerName="nova-scheduler-scheduler" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.165973 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="612bc79f-9cad-4a4c-b575-1b3d5a6a6099" containerName="nova-scheduler-scheduler" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.166194 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="612bc79f-9cad-4a4c-b575-1b3d5a6a6099" containerName="nova-scheduler-scheduler" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.167035 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.172993 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.179923 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.337182 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-config-data\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.337255 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.337375 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdp9t\" (UniqueName: \"kubernetes.io/projected/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-kube-api-access-wdp9t\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.439314 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-config-data\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.439770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.439914 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdp9t\" (UniqueName: \"kubernetes.io/projected/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-kube-api-access-wdp9t\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.445592 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-config-data\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.448181 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.472244 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdp9t\" (UniqueName: 
\"kubernetes.io/projected/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-kube-api-access-wdp9t\") pod \"nova-scheduler-0\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.497487 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.713737 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.800544 4771 generic.go:334] "Generic (PLEG): container finished" podID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerID="5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27" exitCode=0 Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.800651 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.801486 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cf48cb9-50ec-4565-a949-7175c133f3e7","Type":"ContainerDied","Data":"5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27"} Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.801513 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cf48cb9-50ec-4565-a949-7175c133f3e7","Type":"ContainerDied","Data":"c58c435400fe024e3f1438bf224ea335cc2f9aaa34475deae77336b8bf3a11a4"} Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.801529 4771 scope.go:117] "RemoveContainer" containerID="5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.804481 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerStarted","Data":"571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce"} Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.828067 4771 scope.go:117] "RemoveContainer" containerID="a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.853520 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612bc79f-9cad-4a4c-b575-1b3d5a6a6099" path="/var/lib/kubelet/pods/612bc79f-9cad-4a4c-b575-1b3d5a6a6099/volumes" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.859608 4771 scope.go:117] "RemoveContainer" containerID="5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27" Jan 29 09:28:08 crc kubenswrapper[4771]: E0129 09:28:08.860217 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27\": container with ID starting with 5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27 not found: ID does not exist" containerID="5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.860251 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27"} err="failed to get container status \"5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27\": rpc error: code = NotFound desc = could not find container 
\"5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27\": container with ID starting with 5cb3771cec838175127ab1097d5d240abcf28155ddfa711fae104cc99c714c27 not found: ID does not exist" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.860283 4771 scope.go:117] "RemoveContainer" containerID="a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f" Jan 29 09:28:08 crc kubenswrapper[4771]: E0129 09:28:08.861861 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f\": container with ID starting with a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f not found: ID does not exist" containerID="a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.861906 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f"} err="failed to get container status \"a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f\": rpc error: code = NotFound desc = could not find container \"a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f\": container with ID starting with a52a2af3c5b65feabb37ca2265ccaca9a4d2c147a7264d0d9dd75af47cae112f not found: ID does not exist" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.866485 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-config-data\") pod \"2cf48cb9-50ec-4565-a949-7175c133f3e7\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.866563 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-combined-ca-bundle\") pod \"2cf48cb9-50ec-4565-a949-7175c133f3e7\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.866630 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cf48cb9-50ec-4565-a949-7175c133f3e7-logs\") pod \"2cf48cb9-50ec-4565-a949-7175c133f3e7\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.866658 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cpq5\" (UniqueName: \"kubernetes.io/projected/2cf48cb9-50ec-4565-a949-7175c133f3e7-kube-api-access-6cpq5\") pod \"2cf48cb9-50ec-4565-a949-7175c133f3e7\" (UID: \"2cf48cb9-50ec-4565-a949-7175c133f3e7\") " Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.868241 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf48cb9-50ec-4565-a949-7175c133f3e7-logs" (OuterVolumeSpecName: "logs") pod "2cf48cb9-50ec-4565-a949-7175c133f3e7" (UID: "2cf48cb9-50ec-4565-a949-7175c133f3e7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.871929 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf48cb9-50ec-4565-a949-7175c133f3e7-kube-api-access-6cpq5" (OuterVolumeSpecName: "kube-api-access-6cpq5") pod "2cf48cb9-50ec-4565-a949-7175c133f3e7" (UID: "2cf48cb9-50ec-4565-a949-7175c133f3e7"). InnerVolumeSpecName "kube-api-access-6cpq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.898232 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cf48cb9-50ec-4565-a949-7175c133f3e7" (UID: "2cf48cb9-50ec-4565-a949-7175c133f3e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.899487 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-config-data" (OuterVolumeSpecName: "config-data") pod "2cf48cb9-50ec-4565-a949-7175c133f3e7" (UID: "2cf48cb9-50ec-4565-a949-7175c133f3e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.969215 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.969248 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cf48cb9-50ec-4565-a949-7175c133f3e7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.969259 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cpq5\" (UniqueName: \"kubernetes.io/projected/2cf48cb9-50ec-4565-a949-7175c133f3e7-kube-api-access-6cpq5\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:08 crc kubenswrapper[4771]: I0129 09:28:08.969269 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf48cb9-50ec-4565-a949-7175c133f3e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:09 crc kubenswrapper[4771]: W0129 09:28:09.091425 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47eb9e90_ff3d_48ff_aa25_9003cb52fc58.slice/crio-2262dca0b76bc1a5539a9bae6acfc6ea90572fd5085aa4883ee7fd8f064c5ade WatchSource:0}: Error finding container 2262dca0b76bc1a5539a9bae6acfc6ea90572fd5085aa4883ee7fd8f064c5ade: Status 404 returned error can't find the container with id 2262dca0b76bc1a5539a9bae6acfc6ea90572fd5085aa4883ee7fd8f064c5ade Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.104992 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.155837 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.155934 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.167005 4771 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.176058 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.192182 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:09 crc kubenswrapper[4771]: E0129 09:28:09.192807 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-log" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.192824 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-log" Jan 29 09:28:09 crc kubenswrapper[4771]: E0129 09:28:09.192861 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-api" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.192872 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-api" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.193121 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-log" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.193151 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" containerName="nova-api-api" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.195497 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.198861 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.202119 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.379545 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqw9r\" (UniqueName: \"kubernetes.io/projected/b25b1322-303c-4e94-b49d-d0902b615905-kube-api-access-xqw9r\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.379630 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.379780 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-config-data\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.379810 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b25b1322-303c-4e94-b49d-d0902b615905-logs\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 
09:28:09.481265 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqw9r\" (UniqueName: \"kubernetes.io/projected/b25b1322-303c-4e94-b49d-d0902b615905-kube-api-access-xqw9r\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.481333 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.481407 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-config-data\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.481424 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b25b1322-303c-4e94-b49d-d0902b615905-logs\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.481886 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b25b1322-303c-4e94-b49d-d0902b615905-logs\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.485614 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-config-data\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.486163 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.498126 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqw9r\" (UniqueName: \"kubernetes.io/projected/b25b1322-303c-4e94-b49d-d0902b615905-kube-api-access-xqw9r\") pod \"nova-api-0\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") " pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.517726 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.831185 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerStarted","Data":"89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12"} Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.837571 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47eb9e90-ff3d-48ff-aa25-9003cb52fc58","Type":"ContainerStarted","Data":"16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7"} Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.837615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47eb9e90-ff3d-48ff-aa25-9003cb52fc58","Type":"ContainerStarted","Data":"2262dca0b76bc1a5539a9bae6acfc6ea90572fd5085aa4883ee7fd8f064c5ade"} Jan 29 09:28:09 crc kubenswrapper[4771]: I0129 09:28:09.862761 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.86273914 podStartE2EDuration="1.86273914s" podCreationTimestamp="2026-01-29 09:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:09.855155003 +0000 UTC m=+1309.977995240" watchObservedRunningTime="2026-01-29 09:28:09.86273914 +0000 UTC m=+1309.985579367" Jan 29 09:28:10 crc kubenswrapper[4771]: W0129 09:28:10.005859 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb25b1322_303c_4e94_b49d_d0902b615905.slice/crio-a04cbd6a295f1d7411ac71f3db4c45dae27e2c6a2ea52540e9b89eb5902b5ad0 WatchSource:0}: Error finding container a04cbd6a295f1d7411ac71f3db4c45dae27e2c6a2ea52540e9b89eb5902b5ad0: Status 404 returned error can't find the container with id a04cbd6a295f1d7411ac71f3db4c45dae27e2c6a2ea52540e9b89eb5902b5ad0 Jan 29 09:28:10 crc kubenswrapper[4771]: I0129 09:28:10.012409 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:10 crc kubenswrapper[4771]: I0129 09:28:10.157995 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 09:28:10 crc kubenswrapper[4771]: I0129 09:28:10.852757 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf48cb9-50ec-4565-a949-7175c133f3e7" path="/var/lib/kubelet/pods/2cf48cb9-50ec-4565-a949-7175c133f3e7/volumes" Jan 29 09:28:10 crc kubenswrapper[4771]: I0129 09:28:10.854918 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b25b1322-303c-4e94-b49d-d0902b615905","Type":"ContainerStarted","Data":"75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f"} Jan 29 09:28:10 crc kubenswrapper[4771]: I0129 09:28:10.854956 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b25b1322-303c-4e94-b49d-d0902b615905","Type":"ContainerStarted","Data":"98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507"} Jan 29 09:28:10 crc kubenswrapper[4771]: I0129 09:28:10.854966 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b25b1322-303c-4e94-b49d-d0902b615905","Type":"ContainerStarted","Data":"a04cbd6a295f1d7411ac71f3db4c45dae27e2c6a2ea52540e9b89eb5902b5ad0"} Jan 29 09:28:10 crc 
kubenswrapper[4771]: I0129 09:28:10.858214 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerStarted","Data":"93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2"} Jan 29 09:28:10 crc kubenswrapper[4771]: I0129 09:28:10.915749 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.915722282 podStartE2EDuration="1.915722282s" podCreationTimestamp="2026-01-29 09:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:10.907766675 +0000 UTC m=+1311.030606912" watchObservedRunningTime="2026-01-29 09:28:10.915722282 +0000 UTC m=+1311.038562509" Jan 29 09:28:12 crc kubenswrapper[4771]: I0129 09:28:12.380203 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 09:28:12 crc kubenswrapper[4771]: I0129 09:28:12.878143 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerStarted","Data":"e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341"} Jan 29 09:28:12 crc kubenswrapper[4771]: I0129 09:28:12.878327 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:28:12 crc kubenswrapper[4771]: I0129 09:28:12.903908 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.808543715 podStartE2EDuration="6.903876908s" podCreationTimestamp="2026-01-29 09:28:06 +0000 UTC" firstStartedPulling="2026-01-29 09:28:07.684967344 +0000 UTC m=+1307.807807571" lastFinishedPulling="2026-01-29 09:28:11.780300537 +0000 UTC m=+1311.903140764" observedRunningTime="2026-01-29 09:28:12.896519047 +0000 UTC m=+1313.019359274" watchObservedRunningTime="2026-01-29 09:28:12.903876908 +0000 UTC m=+1313.026717125" Jan 29 09:28:13 crc kubenswrapper[4771]: I0129 09:28:13.499436 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:28:14 crc kubenswrapper[4771]: I0129 09:28:14.152713 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:28:14 crc kubenswrapper[4771]: I0129 09:28:14.155273 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:28:15 crc kubenswrapper[4771]: I0129 09:28:15.163856 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:28:15 crc kubenswrapper[4771]: I0129 09:28:15.163854 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:28:18 crc kubenswrapper[4771]: I0129 09:28:18.499675 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:28:18 crc kubenswrapper[4771]: I0129 
Jan 29 09:28:18 crc kubenswrapper[4771]: I0129 09:28:18.971132 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 29 09:28:19 crc kubenswrapper[4771]: I0129 09:28:19.518606 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 09:28:19 crc kubenswrapper[4771]: I0129 09:28:19.519736 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 09:28:20 crc kubenswrapper[4771]: I0129 09:28:20.601937 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:28:20 crc kubenswrapper[4771]: I0129 09:28:20.602152 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 09:28:24 crc kubenswrapper[4771]: I0129 09:28:24.158739 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 29 09:28:24 crc kubenswrapper[4771]: I0129 09:28:24.160853 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 29 09:28:24 crc kubenswrapper[4771]: I0129 09:28:24.166485 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 29 09:28:25 crc kubenswrapper[4771]: I0129 09:28:25.012656 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 29 09:28:25 crc kubenswrapper[4771]: I0129 09:28:25.896283 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:25 crc kubenswrapper[4771]: I0129 09:28:25.968905 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvf7v\" (UniqueName: \"kubernetes.io/projected/be7dbf88-c4fe-4169-8794-d2f6880bdf07-kube-api-access-fvf7v\") pod \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") "
Jan 29 09:28:25 crc kubenswrapper[4771]: I0129 09:28:25.968991 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-combined-ca-bundle\") pod \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") "
Jan 29 09:28:25 crc kubenswrapper[4771]: I0129 09:28:25.969049 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-config-data\") pod \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\" (UID: \"be7dbf88-c4fe-4169-8794-d2f6880bdf07\") "
Jan 29 09:28:25 crc kubenswrapper[4771]: I0129 09:28:25.974356 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7dbf88-c4fe-4169-8794-d2f6880bdf07-kube-api-access-fvf7v" (OuterVolumeSpecName: "kube-api-access-fvf7v") pod "be7dbf88-c4fe-4169-8794-d2f6880bdf07" (UID: "be7dbf88-c4fe-4169-8794-d2f6880bdf07"). InnerVolumeSpecName "kube-api-access-fvf7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.001023 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-config-data" (OuterVolumeSpecName: "config-data") pod "be7dbf88-c4fe-4169-8794-d2f6880bdf07" (UID: "be7dbf88-c4fe-4169-8794-d2f6880bdf07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.001160 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be7dbf88-c4fe-4169-8794-d2f6880bdf07" (UID: "be7dbf88-c4fe-4169-8794-d2f6880bdf07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.022105 4771 generic.go:334] "Generic (PLEG): container finished" podID="be7dbf88-c4fe-4169-8794-d2f6880bdf07" containerID="a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc" exitCode=137
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.022195 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.022218 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"be7dbf88-c4fe-4169-8794-d2f6880bdf07","Type":"ContainerDied","Data":"a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc"}
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.023230 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"be7dbf88-c4fe-4169-8794-d2f6880bdf07","Type":"ContainerDied","Data":"3dcfc678bfa60db8596350d24d52dbb208b7e1a8679afc13084db3402845c6bb"}
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.023261 4771 scope.go:117] "RemoveContainer" containerID="a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.071549 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.071978 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7dbf88-c4fe-4169-8794-d2f6880bdf07-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.072075 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvf7v\" (UniqueName: \"kubernetes.io/projected/be7dbf88-c4fe-4169-8794-d2f6880bdf07-kube-api-access-fvf7v\") on node \"crc\" DevicePath \"\""
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.082905 4771 scope.go:117] "RemoveContainer" containerID="a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc"
Jan 29 09:28:26 crc kubenswrapper[4771]: E0129 09:28:26.083591 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc\": container with ID starting with a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc not found: ID does not exist" containerID="a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.083766 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc"} err="failed to get container status \"a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc\": rpc error: code = NotFound desc = could not find container \"a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc\": container with ID starting with a1dcda504d98078421fc2224a9e03971511ad9d22198e7adbbd09cf6285a09cc not found: ID does not exist"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.100412 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.115647 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.181281 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:28:26 crc kubenswrapper[4771]: E0129 09:28:26.182151 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7dbf88-c4fe-4169-8794-d2f6880bdf07" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.182236 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7dbf88-c4fe-4169-8794-d2f6880bdf07" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.182464 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7dbf88-c4fe-4169-8794-d2f6880bdf07" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.183296 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.185532 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.188881 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.189522 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.214772 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.285346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.285440 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.285491 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt9x9\" (UniqueName: \"kubernetes.io/projected/38681ed7-61ea-42cb-b2bf-8deba8d236bf-kube-api-access-kt9x9\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.285520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.285553 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.388008 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt9x9\" (UniqueName: \"kubernetes.io/projected/38681ed7-61ea-42cb-b2bf-8deba8d236bf-kube-api-access-kt9x9\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.388430 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.388635 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.388871 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.389176 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.393312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.393454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.393958 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.393465 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38681ed7-61ea-42cb-b2bf-8deba8d236bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.405021 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt9x9\" (UniqueName: \"kubernetes.io/projected/38681ed7-61ea-42cb-b2bf-8deba8d236bf-kube-api-access-kt9x9\") pod \"nova-cell1-novncproxy-0\" (UID: \"38681ed7-61ea-42cb-b2bf-8deba8d236bf\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.502780 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.850434 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7dbf88-c4fe-4169-8794-d2f6880bdf07" path="/var/lib/kubelet/pods/be7dbf88-c4fe-4169-8794-d2f6880bdf07/volumes"
Jan 29 09:28:26 crc kubenswrapper[4771]: I0129 09:28:26.960430 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 09:28:26 crc kubenswrapper[4771]: W0129 09:28:26.964044 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38681ed7_61ea_42cb_b2bf_8deba8d236bf.slice/crio-67f792f39a0f7669f490436673b041be81287e98c20b85c069b209479878d078 WatchSource:0}: Error finding container 67f792f39a0f7669f490436673b041be81287e98c20b85c069b209479878d078: Status 404 returned error can't find the container with id 67f792f39a0f7669f490436673b041be81287e98c20b85c069b209479878d078
Jan 29 09:28:27 crc kubenswrapper[4771]: I0129 09:28:27.033916 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"38681ed7-61ea-42cb-b2bf-8deba8d236bf","Type":"ContainerStarted","Data":"67f792f39a0f7669f490436673b041be81287e98c20b85c069b209479878d078"}
Jan 29 09:28:28 crc kubenswrapper[4771]: I0129 09:28:28.044596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"38681ed7-61ea-42cb-b2bf-8deba8d236bf","Type":"ContainerStarted","Data":"2634205c0cfb52a59ec55a77c2f4d149b53818cc0b7669aeb2ce42bd0f33ef62"}
Jan 29 09:28:28 crc kubenswrapper[4771]: I0129 09:28:28.097555 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.097523359 podStartE2EDuration="2.097523359s" podCreationTimestamp="2026-01-29 09:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:28.058917404 +0000 UTC m=+1328.181757631" watchObservedRunningTime="2026-01-29 09:28:28.097523359 +0000 UTC m=+1328.220363596"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.522573 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.523002 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.523799 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.523841 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.529395 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.529793 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.713420 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fcbbbc747-4d9v4"]
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.716243 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.731455 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fcbbbc747-4d9v4"]
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.848614 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.848962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.849161 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8plk\" (UniqueName: \"kubernetes.io/projected/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-kube-api-access-c8plk\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.849214 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.849246 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-svc\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.849284 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-config\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.951392 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8plk\" (UniqueName: \"kubernetes.io/projected/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-kube-api-access-c8plk\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.951463 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.951494 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-svc\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.951520 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-config\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.951581 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.951617 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.952546 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-svc\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.952618 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-config\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.952762 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.953056 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.953277 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4"
\"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" Jan 29 09:28:29 crc kubenswrapper[4771]: I0129 09:28:29.984632 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8plk\" (UniqueName: \"kubernetes.io/projected/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-kube-api-access-c8plk\") pod \"dnsmasq-dns-5fcbbbc747-4d9v4\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" Jan 29 09:28:30 crc kubenswrapper[4771]: I0129 09:28:30.062296 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" Jan 29 09:28:30 crc kubenswrapper[4771]: I0129 09:28:30.597387 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fcbbbc747-4d9v4"] Jan 29 09:28:30 crc kubenswrapper[4771]: W0129 09:28:30.598399 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bcafb82_4d5d_4bf1_8739_2cbc6024e5e6.slice/crio-22313b0fa27453821c0cf1b0b7552af6171fac04618502865a3f865d609e28f2 WatchSource:0}: Error finding container 22313b0fa27453821c0cf1b0b7552af6171fac04618502865a3f865d609e28f2: Status 404 returned error can't find the container with id 22313b0fa27453821c0cf1b0b7552af6171fac04618502865a3f865d609e28f2 Jan 29 09:28:31 crc kubenswrapper[4771]: I0129 09:28:31.083987 4771 generic.go:334] "Generic (PLEG): container finished" podID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" containerID="9dbd6eaf1f88ff35516391d2b216165630bc9ff3f19b70a006fc09f46b25a940" exitCode=0 Jan 29 09:28:31 crc kubenswrapper[4771]: I0129 09:28:31.084254 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" event={"ID":"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6","Type":"ContainerDied","Data":"9dbd6eaf1f88ff35516391d2b216165630bc9ff3f19b70a006fc09f46b25a940"} Jan 29 09:28:31 crc kubenswrapper[4771]: I0129 09:28:31.084426 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" event={"ID":"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6","Type":"ContainerStarted","Data":"22313b0fa27453821c0cf1b0b7552af6171fac04618502865a3f865d609e28f2"} Jan 29 09:28:31 crc kubenswrapper[4771]: I0129 09:28:31.503892 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.009509 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.009875 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="ceilometer-central-agent" containerID="cri-o://571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce" gracePeriod=30 Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.009931 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="sg-core" containerID="cri-o://93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2" gracePeriod=30 Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.010001 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="ceilometer-notification-agent" 
containerID="cri-o://89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12" gracePeriod=30 Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.009931 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="proxy-httpd" containerID="cri-o://e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341" gracePeriod=30 Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.018390 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": read tcp 10.217.0.2:36616->10.217.0.201:3000: read: connection reset by peer" Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.114889 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" event={"ID":"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6","Type":"ContainerStarted","Data":"9642996952f35979ec9f4d4521e88ca03df13454a59ba8bc0cdf93c623d8b39a"} Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.116048 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.141145 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" podStartSLOduration=3.141124969 podStartE2EDuration="3.141124969s" podCreationTimestamp="2026-01-29 09:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:32.138873757 +0000 UTC m=+1332.261713994" watchObservedRunningTime="2026-01-29 09:28:32.141124969 +0000 UTC m=+1332.263965196" Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.500014 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.500278 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-log" containerID="cri-o://98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507" gracePeriod=30 Jan 29 09:28:32 crc kubenswrapper[4771]: I0129 09:28:32.500421 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-api" containerID="cri-o://75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f" gracePeriod=30 Jan 29 09:28:33 crc kubenswrapper[4771]: I0129 09:28:33.126322 4771 generic.go:334] "Generic (PLEG): container finished" podID="b25b1322-303c-4e94-b49d-d0902b615905" containerID="98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507" exitCode=143 Jan 29 09:28:33 crc kubenswrapper[4771]: I0129 09:28:33.126370 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b25b1322-303c-4e94-b49d-d0902b615905","Type":"ContainerDied","Data":"98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507"} Jan 29 09:28:33 crc kubenswrapper[4771]: I0129 09:28:33.130293 4771 generic.go:334] "Generic (PLEG): container finished" podID="b04c3ea3-b31d-48c1-a088-17356009774e" containerID="e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341" exitCode=0 Jan 29 09:28:33 crc kubenswrapper[4771]: 
Jan 29 09:28:33 crc kubenswrapper[4771]: I0129 09:28:33.130325 4771 generic.go:334] "Generic (PLEG): container finished" podID="b04c3ea3-b31d-48c1-a088-17356009774e" containerID="571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce" exitCode=0
Jan 29 09:28:33 crc kubenswrapper[4771]: I0129 09:28:33.130401 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerDied","Data":"e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341"}
Jan 29 09:28:33 crc kubenswrapper[4771]: I0129 09:28:33.130439 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerDied","Data":"93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2"}
Jan 29 09:28:33 crc kubenswrapper[4771]: I0129 09:28:33.130453 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerDied","Data":"571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce"}
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.135859 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.160571 4771 generic.go:334] "Generic (PLEG): container finished" podID="b25b1322-303c-4e94-b49d-d0902b615905" containerID="75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f" exitCode=0
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.160625 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b25b1322-303c-4e94-b49d-d0902b615905","Type":"ContainerDied","Data":"75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f"}
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.160660 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b25b1322-303c-4e94-b49d-d0902b615905","Type":"ContainerDied","Data":"a04cbd6a295f1d7411ac71f3db4c45dae27e2c6a2ea52540e9b89eb5902b5ad0"}
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.160675 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.160683 4771 scope.go:117] "RemoveContainer" containerID="75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.214429 4771 scope.go:117] "RemoveContainer" containerID="98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.289507 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b25b1322-303c-4e94-b49d-d0902b615905-logs\") pod \"b25b1322-303c-4e94-b49d-d0902b615905\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.289910 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-config-data\") pod \"b25b1322-303c-4e94-b49d-d0902b615905\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.290062 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqw9r\" (UniqueName: \"kubernetes.io/projected/b25b1322-303c-4e94-b49d-d0902b615905-kube-api-access-xqw9r\") pod \"b25b1322-303c-4e94-b49d-d0902b615905\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.290196 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-combined-ca-bundle\") pod \"b25b1322-303c-4e94-b49d-d0902b615905\" (UID: \"b25b1322-303c-4e94-b49d-d0902b615905\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.290608 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b25b1322-303c-4e94-b49d-d0902b615905-logs" (OuterVolumeSpecName: "logs") pod "b25b1322-303c-4e94-b49d-d0902b615905" (UID: "b25b1322-303c-4e94-b49d-d0902b615905"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.292315 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b25b1322-303c-4e94-b49d-d0902b615905-logs\") on node \"crc\" DevicePath \"\""
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.301601 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25b1322-303c-4e94-b49d-d0902b615905-kube-api-access-xqw9r" (OuterVolumeSpecName: "kube-api-access-xqw9r") pod "b25b1322-303c-4e94-b49d-d0902b615905" (UID: "b25b1322-303c-4e94-b49d-d0902b615905"). InnerVolumeSpecName "kube-api-access-xqw9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.311411 4771 scope.go:117] "RemoveContainer" containerID="75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f" Jan 29 09:28:36 crc kubenswrapper[4771]: E0129 09:28:36.311942 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f\": container with ID starting with 75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f not found: ID does not exist" containerID="75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.312018 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f"} err="failed to get container status \"75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f\": rpc error: code = NotFound desc = could not find container \"75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f\": container with ID starting with 75d156b6ae81bf936b0dd06ca2b052bbfe7d7925e373e547d3dd2d9e2cf90c6f not found: ID does not exist" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.312071 4771 scope.go:117] "RemoveContainer" containerID="98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507" Jan 29 09:28:36 crc kubenswrapper[4771]: E0129 09:28:36.313307 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507\": container with ID starting with 98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507 not found: ID does not exist" containerID="98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.313357 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507"} err="failed to get container status \"98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507\": rpc error: code = NotFound desc = could not find container \"98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507\": container with ID starting with 98283e76a20740285cfbca01912ca5a41e580bc75f4ba21e3f53c0051bffa507 not found: ID does not exist" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.329841 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b25b1322-303c-4e94-b49d-d0902b615905" (UID: "b25b1322-303c-4e94-b49d-d0902b615905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.335612 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-config-data" (OuterVolumeSpecName: "config-data") pod "b25b1322-303c-4e94-b49d-d0902b615905" (UID: "b25b1322-303c-4e94-b49d-d0902b615905"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.394506 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.394547 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b25b1322-303c-4e94-b49d-d0902b615905-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.394558 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqw9r\" (UniqueName: \"kubernetes.io/projected/b25b1322-303c-4e94-b49d-d0902b615905-kube-api-access-xqw9r\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.508805 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.518111 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.536740 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.552784 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:36 crc kubenswrapper[4771]: E0129 09:28:36.553428 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-log" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.553451 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-log" Jan 29 09:28:36 crc kubenswrapper[4771]: E0129 09:28:36.553510 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-api" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.553521 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-api" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.553757 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-log" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.553774 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25b1322-303c-4e94-b49d-d0902b615905" containerName="nova-api-api" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.555300 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.559317 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.567173 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.568880 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.569107 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.578190 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.606632 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.700826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790c53d1-1651-4109-a99b-f90ddad4cdc2-logs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.700898 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-config-data\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.700945 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqpw\" (UniqueName: \"kubernetes.io/projected/790c53d1-1651-4109-a99b-f90ddad4cdc2-kube-api-access-phqpw\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.700978 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-public-tls-certs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.701033 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.701470 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.803465 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-sg-core-conf-yaml\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804210 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-scripts\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804288 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-log-httpd\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804305 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-run-httpd\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804448 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-config-data\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804494 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-ceilometer-tls-certs\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804532 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-combined-ca-bundle\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804552 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg87l\" (UniqueName: \"kubernetes.io/projected/b04c3ea3-b31d-48c1-a088-17356009774e-kube-api-access-hg87l\") pod \"b04c3ea3-b31d-48c1-a088-17356009774e\" (UID: \"b04c3ea3-b31d-48c1-a088-17356009774e\") "
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790c53d1-1651-4109-a99b-f90ddad4cdc2-logs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804926 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-config-data\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.804976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqpw\" (UniqueName: \"kubernetes.io/projected/790c53d1-1651-4109-a99b-f90ddad4cdc2-kube-api-access-phqpw\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0"
\"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.805021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-public-tls-certs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.805057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.805204 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.805338 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.808559 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790c53d1-1651-4109-a99b-f90ddad4cdc2-logs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.809833 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.810485 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04c3ea3-b31d-48c1-a088-17356009774e-kube-api-access-hg87l" (OuterVolumeSpecName: "kube-api-access-hg87l") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "kube-api-access-hg87l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.810949 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-scripts" (OuterVolumeSpecName: "scripts") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.812208 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.812342 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-config-data\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.812372 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-public-tls-certs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.813952 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.826961 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phqpw\" (UniqueName: \"kubernetes.io/projected/790c53d1-1651-4109-a99b-f90ddad4cdc2-kube-api-access-phqpw\") pod \"nova-api-0\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.838583 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.859490 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b25b1322-303c-4e94-b49d-d0902b615905" path="/var/lib/kubelet/pods/b25b1322-303c-4e94-b49d-d0902b615905/volumes" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.878738 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.907678 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.907863 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg87l\" (UniqueName: \"kubernetes.io/projected/b04c3ea3-b31d-48c1-a088-17356009774e-kube-api-access-hg87l\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.907965 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.908041 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.908098 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.908151 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04c3ea3-b31d-48c1-a088-17356009774e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.910364 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.912508 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:36 crc kubenswrapper[4771]: I0129 09:28:36.947107 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-config-data" (OuterVolumeSpecName: "config-data") pod "b04c3ea3-b31d-48c1-a088-17356009774e" (UID: "b04c3ea3-b31d-48c1-a088-17356009774e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.010502 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.010708 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04c3ea3-b31d-48c1-a088-17356009774e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.307846 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerDied","Data":"89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12"} Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.307929 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.307950 4771 scope.go:117] "RemoveContainer" containerID="e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.308219 4771 generic.go:334] "Generic (PLEG): container finished" podID="b04c3ea3-b31d-48c1-a088-17356009774e" containerID="89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12" exitCode=0 Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.308355 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04c3ea3-b31d-48c1-a088-17356009774e","Type":"ContainerDied","Data":"b376361fe041f3703a5a01fe88b5f88298705a643b809f5f9dfc0db5d5c67953"} Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.357841 4771 scope.go:117] "RemoveContainer" containerID="93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.368050 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.371357 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.381991 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.401861 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.402341 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="proxy-httpd" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402361 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="proxy-httpd" Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.402373 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="sg-core" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402382 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="sg-core" Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.402399 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" 
containerName="ceilometer-central-agent" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402408 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="ceilometer-central-agent" Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.402438 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="ceilometer-notification-agent" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402445 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="ceilometer-notification-agent" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402644 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="ceilometer-central-agent" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402662 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="sg-core" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402682 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="proxy-httpd" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.402849 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" containerName="ceilometer-notification-agent" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.413613 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.415679 4771 scope.go:117] "RemoveContainer" containerID="89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.430791 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.431458 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.431829 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.434992 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.462935 4771 scope.go:117] "RemoveContainer" containerID="571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.478290 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.498921 4771 scope.go:117] "RemoveContainer" containerID="e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341" Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.499900 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341\": container with ID starting with e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341 not found: ID does not exist" containerID="e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.499938 4771 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341"} err="failed to get container status \"e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341\": rpc error: code = NotFound desc = could not find container \"e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341\": container with ID starting with e1c35c7fd02240368fc349400fcd00a94bd63ea477e27ff833de8ecfa315e341 not found: ID does not exist" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.499963 4771 scope.go:117] "RemoveContainer" containerID="93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2" Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.500567 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2\": container with ID starting with 93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2 not found: ID does not exist" containerID="93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.500618 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2"} err="failed to get container status \"93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2\": rpc error: code = NotFound desc = could not find container \"93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2\": container with ID starting with 93a486ba23f81f7a250f055597b749c0668e808768958d29d848956b38c8b1a2 not found: ID does not exist" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.500644 4771 scope.go:117] "RemoveContainer" containerID="89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12" Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.500933 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12\": container with ID starting with 89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12 not found: ID does not exist" containerID="89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.500961 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12"} err="failed to get container status \"89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12\": rpc error: code = NotFound desc = could not find container \"89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12\": container with ID starting with 89d759b075dfce78e598eab4e52c63340e17012e0b4d0eccc06202e97e13ba12 not found: ID does not exist" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.500980 4771 scope.go:117] "RemoveContainer" containerID="571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce" Jan 29 09:28:37 crc kubenswrapper[4771]: E0129 09:28:37.501260 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce\": container with ID starting with 571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce not found: ID does not exist" 
containerID="571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.501285 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce"} err="failed to get container status \"571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce\": rpc error: code = NotFound desc = could not find container \"571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce\": container with ID starting with 571733c2a10461260c0cf413b38108b0d4f19f1824773d329301097468fe24ce not found: ID does not exist" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528076 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528172 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-config-data\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528260 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-scripts\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528331 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528357 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcpm\" (UniqueName: \"kubernetes.io/projected/1ea3117f-141f-46c2-bee3-71a88181068c-kube-api-access-cbcpm\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528405 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea3117f-141f-46c2-bee3-71a88181068c-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.528439 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1ea3117f-141f-46c2-bee3-71a88181068c-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.555870 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tnw9v"] Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.557489 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.561004 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.561269 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.564680 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnw9v"] Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.630528 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.630617 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-config-data\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.630675 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-scripts\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.630744 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.630763 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcpm\" (UniqueName: \"kubernetes.io/projected/1ea3117f-141f-46c2-bee3-71a88181068c-kube-api-access-cbcpm\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.630801 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.630821 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea3117f-141f-46c2-bee3-71a88181068c-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc 
kubenswrapper[4771]: I0129 09:28:37.630849 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea3117f-141f-46c2-bee3-71a88181068c-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.631488 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea3117f-141f-46c2-bee3-71a88181068c-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.634639 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea3117f-141f-46c2-bee3-71a88181068c-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.637281 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-config-data\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.637494 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.637793 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.640435 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-scripts\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.651015 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcpm\" (UniqueName: \"kubernetes.io/projected/1ea3117f-141f-46c2-bee3-71a88181068c-kube-api-access-cbcpm\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.652864 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea3117f-141f-46c2-bee3-71a88181068c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea3117f-141f-46c2-bee3-71a88181068c\") " pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.735824 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-scripts\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 
09:28:37.739391 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-config-data\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.739537 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th9jd\" (UniqueName: \"kubernetes.io/projected/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-kube-api-access-th9jd\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.739634 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.757399 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.841225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-config-data\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.841302 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th9jd\" (UniqueName: \"kubernetes.io/projected/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-kube-api-access-th9jd\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.841333 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.841384 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-scripts\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.846844 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.860361 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-config-data\") pod 
\"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.864870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th9jd\" (UniqueName: \"kubernetes.io/projected/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-kube-api-access-th9jd\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.871274 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-scripts\") pod \"nova-cell1-cell-mapping-tnw9v\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:37 crc kubenswrapper[4771]: I0129 09:28:37.919669 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.263650 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.321195 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea3117f-141f-46c2-bee3-71a88181068c","Type":"ContainerStarted","Data":"994390ba62e2f86fae0919f86109691bc5d71b81d16340aeefe2df0e4a3c36af"} Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.324282 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"790c53d1-1651-4109-a99b-f90ddad4cdc2","Type":"ContainerStarted","Data":"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276"} Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.324371 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"790c53d1-1651-4109-a99b-f90ddad4cdc2","Type":"ContainerStarted","Data":"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90"} Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.324386 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"790c53d1-1651-4109-a99b-f90ddad4cdc2","Type":"ContainerStarted","Data":"9aed0932ae16681713721033e522b09c91a1dfc2983b0b7dfaec33ecb80a4e2a"} Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.355937 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.355912325 podStartE2EDuration="2.355912325s" podCreationTimestamp="2026-01-29 09:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:38.350272231 +0000 UTC m=+1338.473112468" watchObservedRunningTime="2026-01-29 09:28:38.355912325 +0000 UTC m=+1338.478752552" Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.440441 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnw9v"] Jan 29 09:28:38 crc kubenswrapper[4771]: W0129 09:28:38.449301 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc41997c_ddd8_46fd_8c3f_b7bddcf11b59.slice/crio-ac5910484bcf4fdc69799be4ba0d3685db8fdff5f403c307a931b601fd30055d WatchSource:0}: Error finding container ac5910484bcf4fdc69799be4ba0d3685db8fdff5f403c307a931b601fd30055d: 
Status 404 returned error can't find the container with id ac5910484bcf4fdc69799be4ba0d3685db8fdff5f403c307a931b601fd30055d Jan 29 09:28:38 crc kubenswrapper[4771]: I0129 09:28:38.855705 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b04c3ea3-b31d-48c1-a088-17356009774e" path="/var/lib/kubelet/pods/b04c3ea3-b31d-48c1-a088-17356009774e/volumes" Jan 29 09:28:39 crc kubenswrapper[4771]: I0129 09:28:39.338119 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea3117f-141f-46c2-bee3-71a88181068c","Type":"ContainerStarted","Data":"a9c3a7dac56bcc9c9a4d60fc9aea95f071a793b2ef06baf89b531a4a8e66b366"} Jan 29 09:28:39 crc kubenswrapper[4771]: I0129 09:28:39.341662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnw9v" event={"ID":"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59","Type":"ContainerStarted","Data":"dac7d9256e375fb5c36a33aff78fdf37b75e1c9de084d7fa22ab4bb43cf6d167"} Jan 29 09:28:39 crc kubenswrapper[4771]: I0129 09:28:39.341687 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnw9v" event={"ID":"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59","Type":"ContainerStarted","Data":"ac5910484bcf4fdc69799be4ba0d3685db8fdff5f403c307a931b601fd30055d"} Jan 29 09:28:39 crc kubenswrapper[4771]: I0129 09:28:39.361518 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tnw9v" podStartSLOduration=2.361477752 podStartE2EDuration="2.361477752s" podCreationTimestamp="2026-01-29 09:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:39.356766943 +0000 UTC m=+1339.479607200" watchObservedRunningTime="2026-01-29 09:28:39.361477752 +0000 UTC m=+1339.484317989" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.064015 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.143524 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4d8f7df9-nzhdn"] Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.144299 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" podUID="98006db9-8ac9-4fc6-b552-8fc014985454" containerName="dnsmasq-dns" containerID="cri-o://6ba753077ad45f25fa52743cca5f02aa434c4b1d23c1b7f71c76868b8ca3bebb" gracePeriod=10 Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.357078 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea3117f-141f-46c2-bee3-71a88181068c","Type":"ContainerStarted","Data":"eec79bb76140a238468f832bb4aee5dc5e06163012e5738382923598e50284f7"} Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.359188 4771 generic.go:334] "Generic (PLEG): container finished" podID="98006db9-8ac9-4fc6-b552-8fc014985454" containerID="6ba753077ad45f25fa52743cca5f02aa434c4b1d23c1b7f71c76868b8ca3bebb" exitCode=0 Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.359450 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" event={"ID":"98006db9-8ac9-4fc6-b552-8fc014985454","Type":"ContainerDied","Data":"6ba753077ad45f25fa52743cca5f02aa434c4b1d23c1b7f71c76868b8ca3bebb"} Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.683088 4771 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.826340 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-nb\") pod \"98006db9-8ac9-4fc6-b552-8fc014985454\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.826432 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-sb\") pod \"98006db9-8ac9-4fc6-b552-8fc014985454\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.826463 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-svc\") pod \"98006db9-8ac9-4fc6-b552-8fc014985454\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.826495 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-config\") pod \"98006db9-8ac9-4fc6-b552-8fc014985454\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.826674 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-swift-storage-0\") pod \"98006db9-8ac9-4fc6-b552-8fc014985454\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.826787 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zvwd\" (UniqueName: \"kubernetes.io/projected/98006db9-8ac9-4fc6-b552-8fc014985454-kube-api-access-2zvwd\") pod \"98006db9-8ac9-4fc6-b552-8fc014985454\" (UID: \"98006db9-8ac9-4fc6-b552-8fc014985454\") " Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.831966 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98006db9-8ac9-4fc6-b552-8fc014985454-kube-api-access-2zvwd" (OuterVolumeSpecName: "kube-api-access-2zvwd") pod "98006db9-8ac9-4fc6-b552-8fc014985454" (UID: "98006db9-8ac9-4fc6-b552-8fc014985454"). InnerVolumeSpecName "kube-api-access-2zvwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.912739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98006db9-8ac9-4fc6-b552-8fc014985454" (UID: "98006db9-8ac9-4fc6-b552-8fc014985454"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.916484 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98006db9-8ac9-4fc6-b552-8fc014985454" (UID: "98006db9-8ac9-4fc6-b552-8fc014985454"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.922371 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98006db9-8ac9-4fc6-b552-8fc014985454" (UID: "98006db9-8ac9-4fc6-b552-8fc014985454"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.925158 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-config" (OuterVolumeSpecName: "config") pod "98006db9-8ac9-4fc6-b552-8fc014985454" (UID: "98006db9-8ac9-4fc6-b552-8fc014985454"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.927130 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98006db9-8ac9-4fc6-b552-8fc014985454" (UID: "98006db9-8ac9-4fc6-b552-8fc014985454"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.931495 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.931625 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zvwd\" (UniqueName: \"kubernetes.io/projected/98006db9-8ac9-4fc6-b552-8fc014985454-kube-api-access-2zvwd\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.931720 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.931796 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.931885 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:40 crc kubenswrapper[4771]: I0129 09:28:40.931949 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98006db9-8ac9-4fc6-b552-8fc014985454-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:41 crc kubenswrapper[4771]: I0129 09:28:41.373138 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea3117f-141f-46c2-bee3-71a88181068c","Type":"ContainerStarted","Data":"c2600b5e6eb952229f5c3582e98219e7a37257f425117742f2c6cfff72275683"} Jan 29 09:28:41 crc kubenswrapper[4771]: I0129 09:28:41.375711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" 
event={"ID":"98006db9-8ac9-4fc6-b552-8fc014985454","Type":"ContainerDied","Data":"d6db7ad1a1ee7a50765feab01023f67ef40117bf4c867a174bbd24501efc99b0"} Jan 29 09:28:41 crc kubenswrapper[4771]: I0129 09:28:41.375836 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4d8f7df9-nzhdn" Jan 29 09:28:41 crc kubenswrapper[4771]: I0129 09:28:41.375885 4771 scope.go:117] "RemoveContainer" containerID="6ba753077ad45f25fa52743cca5f02aa434c4b1d23c1b7f71c76868b8ca3bebb" Jan 29 09:28:41 crc kubenswrapper[4771]: I0129 09:28:41.403843 4771 scope.go:117] "RemoveContainer" containerID="b3e8dc3d10842aab2e728f6eb25be6d42936ffa4b4fd9a07e300dcf9c3bdfc01" Jan 29 09:28:41 crc kubenswrapper[4771]: I0129 09:28:41.434076 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4d8f7df9-nzhdn"] Jan 29 09:28:41 crc kubenswrapper[4771]: I0129 09:28:41.445332 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f4d8f7df9-nzhdn"] Jan 29 09:28:42 crc kubenswrapper[4771]: I0129 09:28:42.019414 4771 scope.go:117] "RemoveContainer" containerID="551fab5ba086e0561a53c449e0ca5f0fec7ac809197a02924ba3a662663aac9f" Jan 29 09:28:42 crc kubenswrapper[4771]: I0129 09:28:42.049427 4771 scope.go:117] "RemoveContainer" containerID="6f404b11b3a007f9bb56a7fba156b99c3fe8842fd49cb962783fadf0f8c45b91" Jan 29 09:28:42 crc kubenswrapper[4771]: I0129 09:28:42.078189 4771 scope.go:117] "RemoveContainer" containerID="ffc6554286bb8a1e3dce86ff37d49894b626a69167f6d823f07631fac2372cbd" Jan 29 09:28:42 crc kubenswrapper[4771]: I0129 09:28:42.121535 4771 scope.go:117] "RemoveContainer" containerID="b7688842b3334da9fabe727f76de98ed50aaf9a01756be1c8676e85142e964ab" Jan 29 09:28:42 crc kubenswrapper[4771]: I0129 09:28:42.159039 4771 scope.go:117] "RemoveContainer" containerID="b9633a5c7d12eb5022da9d085264415d80c943836f18808b1c1eb42cbd6a91d1" Jan 29 09:28:42 crc kubenswrapper[4771]: I0129 09:28:42.228820 4771 scope.go:117] "RemoveContainer" containerID="ad59e59162be2d762825ab64901e5ae97dedf258fa1d7d4006ca24837ddad4aa" Jan 29 09:28:42 crc kubenswrapper[4771]: I0129 09:28:42.860637 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98006db9-8ac9-4fc6-b552-8fc014985454" path="/var/lib/kubelet/pods/98006db9-8ac9-4fc6-b552-8fc014985454/volumes" Jan 29 09:28:43 crc kubenswrapper[4771]: I0129 09:28:43.420092 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea3117f-141f-46c2-bee3-71a88181068c","Type":"ContainerStarted","Data":"c2bfe70b76fd7ba336a777569ffb5a696be8a28b661d2307aeb8161e480c5f1c"} Jan 29 09:28:43 crc kubenswrapper[4771]: I0129 09:28:43.420749 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:28:43 crc kubenswrapper[4771]: I0129 09:28:43.444799 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.877305373 podStartE2EDuration="6.444770717s" podCreationTimestamp="2026-01-29 09:28:37 +0000 UTC" firstStartedPulling="2026-01-29 09:28:38.275170079 +0000 UTC m=+1338.398010306" lastFinishedPulling="2026-01-29 09:28:42.842635423 +0000 UTC m=+1342.965475650" observedRunningTime="2026-01-29 09:28:43.440037957 +0000 UTC m=+1343.562878214" watchObservedRunningTime="2026-01-29 09:28:43.444770717 +0000 UTC m=+1343.567610944" Jan 29 09:28:44 crc kubenswrapper[4771]: I0129 09:28:44.433950 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" containerID="dac7d9256e375fb5c36a33aff78fdf37b75e1c9de084d7fa22ab4bb43cf6d167" exitCode=0 Jan 29 09:28:44 crc kubenswrapper[4771]: I0129 09:28:44.434089 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnw9v" event={"ID":"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59","Type":"ContainerDied","Data":"dac7d9256e375fb5c36a33aff78fdf37b75e1c9de084d7fa22ab4bb43cf6d167"} Jan 29 09:28:45 crc kubenswrapper[4771]: I0129 09:28:45.961336 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.051933 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-config-data\") pod \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.052007 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-scripts\") pod \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.052052 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-combined-ca-bundle\") pod \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.052237 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th9jd\" (UniqueName: \"kubernetes.io/projected/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-kube-api-access-th9jd\") pod \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\" (UID: \"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59\") " Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.057325 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-kube-api-access-th9jd" (OuterVolumeSpecName: "kube-api-access-th9jd") pod "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" (UID: "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59"). InnerVolumeSpecName "kube-api-access-th9jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.057574 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-scripts" (OuterVolumeSpecName: "scripts") pod "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" (UID: "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.084277 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" (UID: "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.100478 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-config-data" (OuterVolumeSpecName: "config-data") pod "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" (UID: "bc41997c-ddd8-46fd-8c3f-b7bddcf11b59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.156024 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.156388 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.156402 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.156420 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th9jd\" (UniqueName: \"kubernetes.io/projected/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59-kube-api-access-th9jd\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.455339 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnw9v" event={"ID":"bc41997c-ddd8-46fd-8c3f-b7bddcf11b59","Type":"ContainerDied","Data":"ac5910484bcf4fdc69799be4ba0d3685db8fdff5f403c307a931b601fd30055d"} Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.455387 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5910484bcf4fdc69799be4ba0d3685db8fdff5f403c307a931b601fd30055d" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.455459 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnw9v" Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.649688 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.650970 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="47eb9e90-ff3d-48ff-aa25-9003cb52fc58" containerName="nova-scheduler-scheduler" containerID="cri-o://16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7" gracePeriod=30 Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.662464 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.662721 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-log" containerID="cri-o://dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90" gracePeriod=30 Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.662870 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-api" containerID="cri-o://68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276" gracePeriod=30 Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.729175 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.729495 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-log" containerID="cri-o://1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b" gracePeriod=30 Jan 29 09:28:46 crc kubenswrapper[4771]: I0129 09:28:46.729601 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-metadata" containerID="cri-o://83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34" gracePeriod=30 Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.302141 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.408591 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-internal-tls-certs\") pod \"790c53d1-1651-4109-a99b-f90ddad4cdc2\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.408854 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-config-data\") pod \"790c53d1-1651-4109-a99b-f90ddad4cdc2\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.408976 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-public-tls-certs\") pod \"790c53d1-1651-4109-a99b-f90ddad4cdc2\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.409134 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phqpw\" (UniqueName: \"kubernetes.io/projected/790c53d1-1651-4109-a99b-f90ddad4cdc2-kube-api-access-phqpw\") pod \"790c53d1-1651-4109-a99b-f90ddad4cdc2\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.409182 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-combined-ca-bundle\") pod \"790c53d1-1651-4109-a99b-f90ddad4cdc2\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.409247 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790c53d1-1651-4109-a99b-f90ddad4cdc2-logs\") pod \"790c53d1-1651-4109-a99b-f90ddad4cdc2\" (UID: \"790c53d1-1651-4109-a99b-f90ddad4cdc2\") " Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.410135 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790c53d1-1651-4109-a99b-f90ddad4cdc2-logs" (OuterVolumeSpecName: "logs") pod "790c53d1-1651-4109-a99b-f90ddad4cdc2" (UID: "790c53d1-1651-4109-a99b-f90ddad4cdc2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.417193 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790c53d1-1651-4109-a99b-f90ddad4cdc2-kube-api-access-phqpw" (OuterVolumeSpecName: "kube-api-access-phqpw") pod "790c53d1-1651-4109-a99b-f90ddad4cdc2" (UID: "790c53d1-1651-4109-a99b-f90ddad4cdc2"). InnerVolumeSpecName "kube-api-access-phqpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.455465 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-config-data" (OuterVolumeSpecName: "config-data") pod "790c53d1-1651-4109-a99b-f90ddad4cdc2" (UID: "790c53d1-1651-4109-a99b-f90ddad4cdc2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.462120 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "790c53d1-1651-4109-a99b-f90ddad4cdc2" (UID: "790c53d1-1651-4109-a99b-f90ddad4cdc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.481404 4771 generic.go:334] "Generic (PLEG): container finished" podID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerID="68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276" exitCode=0 Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.481518 4771 generic.go:334] "Generic (PLEG): container finished" podID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerID="dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90" exitCode=143 Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.481619 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"790c53d1-1651-4109-a99b-f90ddad4cdc2","Type":"ContainerDied","Data":"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276"} Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.481722 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"790c53d1-1651-4109-a99b-f90ddad4cdc2","Type":"ContainerDied","Data":"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90"} Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.481761 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"790c53d1-1651-4109-a99b-f90ddad4cdc2","Type":"ContainerDied","Data":"9aed0932ae16681713721033e522b09c91a1dfc2983b0b7dfaec33ecb80a4e2a"} Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.481831 4771 scope.go:117] "RemoveContainer" containerID="68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.482025 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.488787 4771 generic.go:334] "Generic (PLEG): container finished" podID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerID="1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b" exitCode=143 Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.488884 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0","Type":"ContainerDied","Data":"1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b"} Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.494680 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "790c53d1-1651-4109-a99b-f90ddad4cdc2" (UID: "790c53d1-1651-4109-a99b-f90ddad4cdc2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.509856 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "790c53d1-1651-4109-a99b-f90ddad4cdc2" (UID: "790c53d1-1651-4109-a99b-f90ddad4cdc2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.511390 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phqpw\" (UniqueName: \"kubernetes.io/projected/790c53d1-1651-4109-a99b-f90ddad4cdc2-kube-api-access-phqpw\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.511428 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.511441 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790c53d1-1651-4109-a99b-f90ddad4cdc2-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.511453 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.511464 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.511475 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/790c53d1-1651-4109-a99b-f90ddad4cdc2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.570787 4771 scope.go:117] "RemoveContainer" containerID="dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.597114 4771 scope.go:117] "RemoveContainer" containerID="68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276" Jan 29 09:28:47 crc kubenswrapper[4771]: E0129 09:28:47.597620 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276\": container with ID starting with 68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276 not found: ID does not exist" containerID="68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.597664 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276"} err="failed to get container status \"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276\": rpc error: code = NotFound desc = could not find container \"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276\": container with ID starting with 68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276 not found: ID does not exist" Jan 29 09:28:47 crc 
kubenswrapper[4771]: I0129 09:28:47.597707 4771 scope.go:117] "RemoveContainer" containerID="dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90" Jan 29 09:28:47 crc kubenswrapper[4771]: E0129 09:28:47.598303 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90\": container with ID starting with dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90 not found: ID does not exist" containerID="dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.598340 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90"} err="failed to get container status \"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90\": rpc error: code = NotFound desc = could not find container \"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90\": container with ID starting with dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90 not found: ID does not exist" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.600469 4771 scope.go:117] "RemoveContainer" containerID="68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.600966 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276"} err="failed to get container status \"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276\": rpc error: code = NotFound desc = could not find container \"68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276\": container with ID starting with 68c60213901fe948d78c43698b39c1b58d91b5070efc4b14265785920dd09276 not found: ID does not exist" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.600993 4771 scope.go:117] "RemoveContainer" containerID="dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.603329 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90"} err="failed to get container status \"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90\": rpc error: code = NotFound desc = could not find container \"dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90\": container with ID starting with dd802f6053dff0e4867884dd2f195dc50470a7fa4d20c7773d57902aa3701e90 not found: ID does not exist" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.813302 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.821354 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.904297 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:47 crc kubenswrapper[4771]: E0129 09:28:47.905385 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-api" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905402 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-api" Jan 29 09:28:47 crc kubenswrapper[4771]: E0129 09:28:47.905421 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98006db9-8ac9-4fc6-b552-8fc014985454" containerName="dnsmasq-dns" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905431 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="98006db9-8ac9-4fc6-b552-8fc014985454" containerName="dnsmasq-dns" Jan 29 09:28:47 crc kubenswrapper[4771]: E0129 09:28:47.905466 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-log" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905473 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-log" Jan 29 09:28:47 crc kubenswrapper[4771]: E0129 09:28:47.905510 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98006db9-8ac9-4fc6-b552-8fc014985454" containerName="init" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905519 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="98006db9-8ac9-4fc6-b552-8fc014985454" containerName="init" Jan 29 09:28:47 crc kubenswrapper[4771]: E0129 09:28:47.905546 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" containerName="nova-manage" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905552 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" containerName="nova-manage" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905901 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" containerName="nova-manage" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905931 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-api" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905950 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" containerName="nova-api-log" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.905966 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="98006db9-8ac9-4fc6-b552-8fc014985454" containerName="dnsmasq-dns" Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.923979 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.923979 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.930240 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.930406 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.930815 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 29 09:28:47 crc kubenswrapper[4771]: I0129 09:28:47.933094 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.038227 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-config-data\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0"
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.038358 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0"
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.038399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfr2\" (UniqueName: \"kubernetes.io/projected/460724f7-49b9-475d-a983-5ffa6998815d-kube-api-access-kqfr2\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0"
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.038430 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0"
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.038462 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-public-tls-certs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0"
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.038490 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460724f7-49b9-475d-a983-5ffa6998815d-logs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0"
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.121126 4771 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.140578 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.140662 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-public-tls-certs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.140719 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460724f7-49b9-475d-a983-5ffa6998815d-logs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.140773 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-config-data\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.140859 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.140903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfr2\" (UniqueName: \"kubernetes.io/projected/460724f7-49b9-475d-a983-5ffa6998815d-kube-api-access-kqfr2\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.143313 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460724f7-49b9-475d-a983-5ffa6998815d-logs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.152393 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.153361 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-config-data\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.157532 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-public-tls-certs\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc 
kubenswrapper[4771]: I0129 09:28:48.183685 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460724f7-49b9-475d-a983-5ffa6998815d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.224064 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfr2\" (UniqueName: \"kubernetes.io/projected/460724f7-49b9-475d-a983-5ffa6998815d-kube-api-access-kqfr2\") pod \"nova-api-0\" (UID: \"460724f7-49b9-475d-a983-5ffa6998815d\") " pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.244499 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-combined-ca-bundle\") pod \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.244588 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-config-data\") pod \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.244810 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdp9t\" (UniqueName: \"kubernetes.io/projected/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-kube-api-access-wdp9t\") pod \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\" (UID: \"47eb9e90-ff3d-48ff-aa25-9003cb52fc58\") " Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.256113 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-kube-api-access-wdp9t" (OuterVolumeSpecName: "kube-api-access-wdp9t") pod "47eb9e90-ff3d-48ff-aa25-9003cb52fc58" (UID: "47eb9e90-ff3d-48ff-aa25-9003cb52fc58"). InnerVolumeSpecName "kube-api-access-wdp9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.271507 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.286588 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-config-data" (OuterVolumeSpecName: "config-data") pod "47eb9e90-ff3d-48ff-aa25-9003cb52fc58" (UID: "47eb9e90-ff3d-48ff-aa25-9003cb52fc58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.298527 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47eb9e90-ff3d-48ff-aa25-9003cb52fc58" (UID: "47eb9e90-ff3d-48ff-aa25-9003cb52fc58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.347489 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdp9t\" (UniqueName: \"kubernetes.io/projected/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-kube-api-access-wdp9t\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.348392 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.348486 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eb9e90-ff3d-48ff-aa25-9003cb52fc58-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.506386 4771 generic.go:334] "Generic (PLEG): container finished" podID="47eb9e90-ff3d-48ff-aa25-9003cb52fc58" containerID="16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7" exitCode=0 Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.506481 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.506508 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47eb9e90-ff3d-48ff-aa25-9003cb52fc58","Type":"ContainerDied","Data":"16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7"} Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.507084 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47eb9e90-ff3d-48ff-aa25-9003cb52fc58","Type":"ContainerDied","Data":"2262dca0b76bc1a5539a9bae6acfc6ea90572fd5085aa4883ee7fd8f064c5ade"} Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.507109 4771 scope.go:117] "RemoveContainer" containerID="16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.531192 4771 scope.go:117] "RemoveContainer" containerID="16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7" Jan 29 09:28:48 crc kubenswrapper[4771]: E0129 09:28:48.531675 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7\": container with ID starting with 16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7 not found: ID does not exist" containerID="16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.531728 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7"} err="failed to get container status \"16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7\": rpc error: code = NotFound desc = could not find container \"16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7\": container with ID starting with 16649d113c6ec1a3216f4c28ef2e45776e798a837055ad11d8e86d0b4866a5e7 not found: ID does not exist" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.565452 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.577600 4771 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.589765 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:48 crc kubenswrapper[4771]: E0129 09:28:48.590154 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47eb9e90-ff3d-48ff-aa25-9003cb52fc58" containerName="nova-scheduler-scheduler" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.590169 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="47eb9e90-ff3d-48ff-aa25-9003cb52fc58" containerName="nova-scheduler-scheduler" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.590346 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="47eb9e90-ff3d-48ff-aa25-9003cb52fc58" containerName="nova-scheduler-scheduler" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.591130 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.593413 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.594655 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.655507 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p8hp\" (UniqueName: \"kubernetes.io/projected/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-kube-api-access-7p8hp\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.655985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-config-data\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.656417 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.711090 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.759378 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.759443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p8hp\" (UniqueName: \"kubernetes.io/projected/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-kube-api-access-7p8hp\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.759474 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-config-data\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.766272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-config-data\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.771425 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.780142 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p8hp\" (UniqueName: \"kubernetes.io/projected/d24a45bb-85f8-42c2-bee5-0b5407bdc52e-kube-api-access-7p8hp\") pod \"nova-scheduler-0\" (UID: \"d24a45bb-85f8-42c2-bee5-0b5407bdc52e\") " pod="openstack/nova-scheduler-0" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.859683 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47eb9e90-ff3d-48ff-aa25-9003cb52fc58" path="/var/lib/kubelet/pods/47eb9e90-ff3d-48ff-aa25-9003cb52fc58/volumes" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.860745 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790c53d1-1651-4109-a99b-f90ddad4cdc2" path="/var/lib/kubelet/pods/790c53d1-1651-4109-a99b-f90ddad4cdc2/volumes" Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.914013 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:28:48 crc kubenswrapper[4771]: I0129 09:28:48.914013 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 09:28:49 crc kubenswrapper[4771]: W0129 09:28:49.456347 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd24a45bb_85f8_42c2_bee5_0b5407bdc52e.slice/crio-686de15a49cd5991b3bd8d4e003568272a0431d66a7d3206628fcb930742b8d2 WatchSource:0}: Error finding container 686de15a49cd5991b3bd8d4e003568272a0431d66a7d3206628fcb930742b8d2: Status 404 returned error can't find the container with id 686de15a49cd5991b3bd8d4e003568272a0431d66a7d3206628fcb930742b8d2
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.460002 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.517503 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"460724f7-49b9-475d-a983-5ffa6998815d","Type":"ContainerStarted","Data":"9dddd684107e2cdc336db0acf87b737cca5b11a6cdfb8c4aca216a1ebc4a2113"}
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.517560 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"460724f7-49b9-475d-a983-5ffa6998815d","Type":"ContainerStarted","Data":"f52cb567d3a3432816283d482739cfe2dc99940f62a799f1de264888a69d33ca"}
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.517577 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"460724f7-49b9-475d-a983-5ffa6998815d","Type":"ContainerStarted","Data":"9f81c6072f03d218d2b4af1bc4d70043e288dc26aa3d684148456c537d70824c"}
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.528483 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d24a45bb-85f8-42c2-bee5-0b5407bdc52e","Type":"ContainerStarted","Data":"686de15a49cd5991b3bd8d4e003568272a0431d66a7d3206628fcb930742b8d2"}
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.545605 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.545581558 podStartE2EDuration="2.545581558s" podCreationTimestamp="2026-01-29 09:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:49.539727718 +0000 UTC m=+1349.662567945" watchObservedRunningTime="2026-01-29 09:28:49.545581558 +0000 UTC m=+1349.668421785"
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.865091 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:59850->10.217.0.199:8775: read: connection reset by peer"
Jan 29 09:28:49 crc kubenswrapper[4771]: I0129 09:28:49.865091 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:59860->10.217.0.199:8775: read: connection reset by peer"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.332940 4771 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.392489 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-config-data\") pod \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.392547 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-nova-metadata-tls-certs\") pod \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.392726 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h6nb\" (UniqueName: \"kubernetes.io/projected/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-kube-api-access-8h6nb\") pod \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.392805 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-combined-ca-bundle\") pod \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.392836 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-logs\") pod \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\" (UID: \"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0\") " Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.393781 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-logs" (OuterVolumeSpecName: "logs") pod "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" (UID: "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.398497 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-kube-api-access-8h6nb" (OuterVolumeSpecName: "kube-api-access-8h6nb") pod "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" (UID: "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0"). InnerVolumeSpecName "kube-api-access-8h6nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.450923 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-config-data" (OuterVolumeSpecName: "config-data") pod "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" (UID: "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.464787 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" (UID: "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.465754 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" (UID: "b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.496268 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h6nb\" (UniqueName: \"kubernetes.io/projected/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-kube-api-access-8h6nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.496311 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.496322 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.496336 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.496347 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.542035 4771 generic.go:334] "Generic (PLEG): container finished" podID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerID="83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34" exitCode=0 Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.542107 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0","Type":"ContainerDied","Data":"83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34"} Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.542124 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.542136 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0","Type":"ContainerDied","Data":"5f9a1db6d904637198a87ec95446f02c3518b6968016d6b6cd3225f97c383664"} Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.542154 4771 scope.go:117] "RemoveContainer" containerID="83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.549745 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d24a45bb-85f8-42c2-bee5-0b5407bdc52e","Type":"ContainerStarted","Data":"347f3908f4deb29262c6401ac74d5e6b90d04d18c5e3b50a8a6bfe6d201c36cc"} Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.585472 4771 scope.go:117] "RemoveContainer" containerID="1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.595661 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.595629531 podStartE2EDuration="2.595629531s" podCreationTimestamp="2026-01-29 09:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:50.580467616 +0000 UTC m=+1350.703307853" watchObservedRunningTime="2026-01-29 09:28:50.595629531 +0000 UTC m=+1350.718469758" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.634022 4771 scope.go:117] "RemoveContainer" containerID="83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34" Jan 29 09:28:50 crc kubenswrapper[4771]: E0129 09:28:50.634949 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34\": container with ID starting with 83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34 not found: ID does not exist" containerID="83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.636370 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34"} err="failed to get container status \"83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34\": rpc error: code = NotFound desc = could not find container \"83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34\": container with ID starting with 83b78161028b5931b7e38cd62cde31c774252f71f2c155539a97eb08b541cd34 not found: ID does not exist" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.636497 4771 scope.go:117] "RemoveContainer" containerID="1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b" Jan 29 09:28:50 crc kubenswrapper[4771]: E0129 09:28:50.641902 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b\": container with ID starting with 1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b not found: ID does not exist" containerID="1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.641997 4771 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b"} err="failed to get container status \"1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b\": rpc error: code = NotFound desc = could not find container \"1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b\": container with ID starting with 1a485eab5143fcde854eb33331aeac9be74e1bcac3d6e6d655b95a606145e67b not found: ID does not exist" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.658340 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.674832 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.683059 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:50 crc kubenswrapper[4771]: E0129 09:28:50.683636 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-metadata" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.683661 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-metadata" Jan 29 09:28:50 crc kubenswrapper[4771]: E0129 09:28:50.683709 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-log" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.683716 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-log" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.683935 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-log" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.683959 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" containerName="nova-metadata-metadata" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.685094 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.685094 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.687657 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.688054 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.691438 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.807990 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.808174 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.808269 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-config-data\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.808344 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqht5\" (UniqueName: \"kubernetes.io/projected/61ef1c72-f256-4e8a-ad21-b4cae84753e5-kube-api-access-cqht5\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.808392 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ef1c72-f256-4e8a-ad21-b4cae84753e5-logs\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.862812 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0" path="/var/lib/kubelet/pods/b85eaeb5-6b2f-4381-bb2d-26877f2c8ff0/volumes"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.910170 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqht5\" (UniqueName: \"kubernetes.io/projected/61ef1c72-f256-4e8a-ad21-b4cae84753e5-kube-api-access-cqht5\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.910548 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ef1c72-f256-4e8a-ad21-b4cae84753e5-logs\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0"
Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.910721 4771 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.910915 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.911033 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ef1c72-f256-4e8a-ad21-b4cae84753e5-logs\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.911113 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-config-data\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.915636 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.916534 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-config-data\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.917019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef1c72-f256-4e8a-ad21-b4cae84753e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:50 crc kubenswrapper[4771]: I0129 09:28:50.945886 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqht5\" (UniqueName: \"kubernetes.io/projected/61ef1c72-f256-4e8a-ad21-b4cae84753e5-kube-api-access-cqht5\") pod \"nova-metadata-0\" (UID: \"61ef1c72-f256-4e8a-ad21-b4cae84753e5\") " pod="openstack/nova-metadata-0" Jan 29 09:28:51 crc kubenswrapper[4771]: I0129 09:28:51.001337 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:28:51 crc kubenswrapper[4771]: I0129 09:28:51.486965 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:28:51 crc kubenswrapper[4771]: W0129 09:28:51.490646 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61ef1c72_f256_4e8a_ad21_b4cae84753e5.slice/crio-77f6c8b1d3fcd0d13e2d5b73ed413ec7d31c324dcfa1d40c866131d82c77c198 WatchSource:0}: Error finding container 77f6c8b1d3fcd0d13e2d5b73ed413ec7d31c324dcfa1d40c866131d82c77c198: Status 404 returned error can't find the container with id 77f6c8b1d3fcd0d13e2d5b73ed413ec7d31c324dcfa1d40c866131d82c77c198 Jan 29 09:28:51 crc kubenswrapper[4771]: I0129 09:28:51.560376 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61ef1c72-f256-4e8a-ad21-b4cae84753e5","Type":"ContainerStarted","Data":"77f6c8b1d3fcd0d13e2d5b73ed413ec7d31c324dcfa1d40c866131d82c77c198"} Jan 29 09:28:52 crc kubenswrapper[4771]: I0129 09:28:52.574523 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61ef1c72-f256-4e8a-ad21-b4cae84753e5","Type":"ContainerStarted","Data":"fad8fd25d7bea14b863c8c0cabf24f85a13ef51c9bc86fceac2bc5a2e0cd70b8"} Jan 29 09:28:52 crc kubenswrapper[4771]: I0129 09:28:52.574902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"61ef1c72-f256-4e8a-ad21-b4cae84753e5","Type":"ContainerStarted","Data":"c4a79b41995b2a2ef018816eafceb456ff4348a9e4d57344f3a0732f71c263d3"} Jan 29 09:28:52 crc kubenswrapper[4771]: I0129 09:28:52.600344 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.600326468 podStartE2EDuration="2.600326468s" podCreationTimestamp="2026-01-29 09:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:28:52.593912542 +0000 UTC m=+1352.716752789" watchObservedRunningTime="2026-01-29 09:28:52.600326468 +0000 UTC m=+1352.723166695" Jan 29 09:28:53 crc kubenswrapper[4771]: I0129 09:28:53.914172 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:28:56 crc kubenswrapper[4771]: I0129 09:28:56.002188 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:28:56 crc kubenswrapper[4771]: I0129 09:28:56.002580 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:28:58 crc kubenswrapper[4771]: I0129 09:28:58.272266 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:28:58 crc kubenswrapper[4771]: I0129 09:28:58.272631 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:28:58 crc kubenswrapper[4771]: I0129 09:28:58.915267 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:28:58 crc kubenswrapper[4771]: I0129 09:28:58.949950 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:28:59 crc kubenswrapper[4771]: I0129 09:28:59.286854 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="460724f7-49b9-475d-a983-5ffa6998815d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:28:59 crc kubenswrapper[4771]: I0129 09:28:59.286946 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="460724f7-49b9-475d-a983-5ffa6998815d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:28:59 crc kubenswrapper[4771]: I0129 09:28:59.660169 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:29:01 crc kubenswrapper[4771]: I0129 09:29:01.002434 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:29:01 crc kubenswrapper[4771]: I0129 09:29:01.002533 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:29:02 crc kubenswrapper[4771]: I0129 09:29:02.014862 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="61ef1c72-f256-4e8a-ad21-b4cae84753e5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:29:02 crc kubenswrapper[4771]: I0129 09:29:02.014905 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="61ef1c72-f256-4e8a-ad21-b4cae84753e5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:29:07 crc kubenswrapper[4771]: I0129 09:29:07.775688 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:29:08 crc kubenswrapper[4771]: I0129 09:29:08.283099 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:29:08 crc kubenswrapper[4771]: I0129 09:29:08.283711 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:29:08 crc kubenswrapper[4771]: I0129 09:29:08.289127 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:29:08 crc kubenswrapper[4771]: I0129 09:29:08.291523 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:29:08 crc kubenswrapper[4771]: I0129 09:29:08.729175 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:29:08 crc kubenswrapper[4771]: I0129 09:29:08.739660 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:29:11 crc kubenswrapper[4771]: I0129 09:29:11.011387 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:29:11 crc kubenswrapper[4771]: I0129 09:29:11.011762 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:29:11 crc kubenswrapper[4771]: I0129 09:29:11.017575 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:29:11 crc kubenswrapper[4771]: I0129 09:29:11.020156 
4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:29:14 crc kubenswrapper[4771]: I0129 09:29:14.270969 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:29:14 crc kubenswrapper[4771]: I0129 09:29:14.271335 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:29:19 crc kubenswrapper[4771]: I0129 09:29:19.395555 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:29:20 crc kubenswrapper[4771]: I0129 09:29:20.237949 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:29:23 crc kubenswrapper[4771]: I0129 09:29:23.787515 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerName="rabbitmq" containerID="cri-o://c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5" gracePeriod=604796 Jan 29 09:29:23 crc kubenswrapper[4771]: I0129 09:29:23.918892 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 29 09:29:24 crc kubenswrapper[4771]: I0129 09:29:24.859878 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerName="rabbitmq" containerID="cri-o://ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf" gracePeriod=604796 Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.366069 4771 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465601 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9abaa29e-0912-445b-a09f-5ce90865a13b-pod-info\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465666 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-plugins\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465733 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l4tp\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-kube-api-access-5l4tp\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465798 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-confd\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465821 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-plugins-conf\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465890 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-config-data\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465918 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-server-conf\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465957 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9abaa29e-0912-445b-a09f-5ce90865a13b-erlang-cookie-secret\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.465974 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.466001 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-tls\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.466040 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-erlang-cookie\") pod \"9abaa29e-0912-445b-a09f-5ce90865a13b\" (UID: \"9abaa29e-0912-445b-a09f-5ce90865a13b\") "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.466788 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.467016 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.467414 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.483258 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.486874 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abaa29e-0912-445b-a09f-5ce90865a13b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.487033 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.502440 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-kube-api-access-5l4tp" (OuterVolumeSpecName: "kube-api-access-5l4tp") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "kube-api-access-5l4tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.502894 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9abaa29e-0912-445b-a09f-5ce90865a13b-pod-info" (OuterVolumeSpecName: "pod-info") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.572502 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-config-data" (OuterVolumeSpecName: "config-data") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.572951 4771 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9abaa29e-0912-445b-a09f-5ce90865a13b-pod-info\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.572986 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.572998 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l4tp\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-kube-api-access-5l4tp\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.573006 4771 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.573014 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.573021 4771 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9abaa29e-0912-445b-a09f-5ce90865a13b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.573050 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.573059 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.573067 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.656702 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
node "crc" Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.670269 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-server-conf" (OuterVolumeSpecName: "server-conf") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.674612 4771 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9abaa29e-0912-445b-a09f-5ce90865a13b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.674649 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.750860 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9abaa29e-0912-445b-a09f-5ce90865a13b" (UID: "9abaa29e-0912-445b-a09f-5ce90865a13b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.776062 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9abaa29e-0912-445b-a09f-5ce90865a13b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.995426 4771 generic.go:334] "Generic (PLEG): container finished" podID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerID="c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5" exitCode=0 Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.995467 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9abaa29e-0912-445b-a09f-5ce90865a13b","Type":"ContainerDied","Data":"c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5"} Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.995494 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9abaa29e-0912-445b-a09f-5ce90865a13b","Type":"ContainerDied","Data":"3fc2715288aff49f1a95f87411938857c6424bb586063dd449618347907bb346"} Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.995522 4771 scope.go:117] "RemoveContainer" containerID="c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5" Jan 29 09:29:30 crc kubenswrapper[4771]: I0129 09:29:30.995638 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.026415 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.036950 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.045799 4771 scope.go:117] "RemoveContainer" containerID="2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.076290 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:29:31 crc kubenswrapper[4771]: E0129 09:29:31.077639 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerName="rabbitmq" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.077782 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerName="rabbitmq" Jan 29 09:29:31 crc kubenswrapper[4771]: E0129 09:29:31.077916 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerName="setup-container" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.078000 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerName="setup-container" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.078302 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" containerName="rabbitmq" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.084000 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.088844 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.088993 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.089140 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.089343 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.089435 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.089638 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-px4wr" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.091499 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.136934 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.184918 4771 scope.go:117] "RemoveContainer" containerID="c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5" Jan 29 09:29:31 crc kubenswrapper[4771]: E0129 09:29:31.185612 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5\": container with ID starting with c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5 not found: ID does not exist" containerID="c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.185730 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5"} err="failed to get container status \"c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5\": rpc error: code = NotFound desc = could not find container \"c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5\": container with ID starting with c1a2e6f3ba22f0b984756f1427b80be3bc1cdaa468a3044eb477c88491e714f5 not found: ID does not exist" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.185768 4771 scope.go:117] "RemoveContainer" containerID="2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4" Jan 29 09:29:31 crc kubenswrapper[4771]: E0129 09:29:31.186051 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4\": container with ID starting with 2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4 not found: ID does not exist" containerID="2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.186084 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4"} err="failed to get container status 
\"2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4\": rpc error: code = NotFound desc = could not find container \"2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4\": container with ID starting with 2163c295b50e47b5eaa9ada281de8c9865c31f63e1ef06fae525a31a6b6125c4 not found: ID does not exist" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187203 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-config-data\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187276 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/222f1966-eb07-4bcb-986d-70287a36fc90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187335 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187391 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/222f1966-eb07-4bcb-986d-70287a36fc90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187440 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187664 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jzm4\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-kube-api-access-4jzm4\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187691 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187782 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187818 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187863 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.187894 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.289720 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.289786 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-config-data\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.289845 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/222f1966-eb07-4bcb-986d-70287a36fc90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.289904 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.289952 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/222f1966-eb07-4bcb-986d-70287a36fc90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.289997 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.290021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jzm4\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-kube-api-access-4jzm4\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " 
pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.290045 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.290075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.290115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.290164 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.290252 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.291001 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-config-data\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.291257 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.291716 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.292191 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.292677 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/222f1966-eb07-4bcb-986d-70287a36fc90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.298678 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/222f1966-eb07-4bcb-986d-70287a36fc90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.299842 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.304479 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.305800 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/222f1966-eb07-4bcb-986d-70287a36fc90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.308197 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jzm4\" (UniqueName: \"kubernetes.io/projected/222f1966-eb07-4bcb-986d-70287a36fc90-kube-api-access-4jzm4\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.335983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"222f1966-eb07-4bcb-986d-70287a36fc90\") " pod="openstack/rabbitmq-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.503506 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.525913 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596526 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-tls\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596597 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-confd\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596657 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-plugins-conf\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596735 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-pod-info\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596794 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596886 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-erlang-cookie\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596935 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-plugins\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.596988 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-server-conf\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.597051 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-erlang-cookie-secret\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.597076 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv4gc\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-kube-api-access-wv4gc\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.597105 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-config-data\") pod \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\" (UID: \"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6\") "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.597732 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.598122 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.598140 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.611339 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.612376 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-kube-api-access-wv4gc" (OuterVolumeSpecName: "kube-api-access-wv4gc") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "kube-api-access-wv4gc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.614655 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.616591 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.617769 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-pod-info" (OuterVolumeSpecName: "pod-info") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.643368 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-config-data" (OuterVolumeSpecName: "config-data") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.680728 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-server-conf" (OuterVolumeSpecName: "server-conf") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699433 4771 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699452 4771 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-pod-info\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699472 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699482 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699493 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699501 4771 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-server-conf\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699509 4771 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699518 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv4gc\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-kube-api-access-wv4gc\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699526 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.699534 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.747914 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.753915 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" (UID: "f3061d0c-7a27-4062-b2a7-12f8a1e1fac6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.801368 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:31 crc kubenswrapper[4771]: I0129 09:29:31.801397 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.008387 4771 generic.go:334] "Generic (PLEG): container finished" podID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerID="ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf" exitCode=0
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.008490 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.008524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6","Type":"ContainerDied","Data":"ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf"}
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.008880 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3061d0c-7a27-4062-b2a7-12f8a1e1fac6","Type":"ContainerDied","Data":"807ce9f538ce48bb57993bff1eab2a7316a1278943884a27e9847a3a91f180f4"}
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.008915 4771 scope.go:117] "RemoveContainer" containerID="ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.039396 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.047416 4771 scope.go:117] "RemoveContainer" containerID="a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.050105 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.061008 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.088562 4771 scope.go:117] "RemoveContainer" containerID="ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf"
Jan 29 09:29:32 crc kubenswrapper[4771]: E0129 09:29:32.089041 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf\": container with ID starting with ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf not found: ID does not exist" containerID="ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.089087 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf"} err="failed to get container status \"ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf\": rpc error: code = NotFound desc = could not find container \"ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf\": container with ID starting with ccd95bcd3e350e6174b95be7f175ec2a595496fde2d444e191f19687297b80bf not found: ID does not exist"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.089110 4771 scope.go:117] "RemoveContainer" containerID="a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02"
Jan 29 09:29:32 crc kubenswrapper[4771]: E0129 09:29:32.089425 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02\": container with ID starting with a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02 not found: ID does not exist" containerID="a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.089460 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02"} err="failed to get container status \"a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02\": rpc error: code = NotFound desc = could not find container \"a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02\": container with ID starting with a6027e7def9780b6c07327a3b07888d81ac21da1634fd727b4e2cb7acf700b02 not found: ID does not exist"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.092199 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 09:29:32 crc kubenswrapper[4771]: E0129 09:29:32.092809 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerName="setup-container"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.092829 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerName="setup-container"
Jan 29 09:29:32 crc kubenswrapper[4771]: E0129 09:29:32.092849 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerName="rabbitmq"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.092855 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerName="rabbitmq"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.093015 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" containerName="rabbitmq"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.093977 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.100097 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.100339 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.100462 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.100596 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.100862 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.101043 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.101714 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-92qks"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.110722 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210325 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210365 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23244fea-bb17-4ba0-b353-d4f98af3d93d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210402 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23244fea-bb17-4ba0-b353-d4f98af3d93d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210449 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210482 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210503 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210528 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210571 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210595 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210613 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbhfl\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-kube-api-access-bbhfl\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.210660 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.313687 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314302 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23244fea-bb17-4ba0-b353-d4f98af3d93d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23244fea-bb17-4ba0-b353-d4f98af3d93d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314467 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314540 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314571 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314630 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314710 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314776 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbhfl\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-kube-api-access-bbhfl\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.314877 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.315563 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.315599 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.316216 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.316773 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.317026 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.317088 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23244fea-bb17-4ba0-b353-d4f98af3d93d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.321734 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23244fea-bb17-4ba0-b353-d4f98af3d93d-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.322451 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.326398 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.333388 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23244fea-bb17-4ba0-b353-d4f98af3d93d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.340216 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbhfl\" (UniqueName: \"kubernetes.io/projected/23244fea-bb17-4ba0-b353-d4f98af3d93d-kube-api-access-bbhfl\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.350337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23244fea-bb17-4ba0-b353-d4f98af3d93d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.480129 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.850310 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abaa29e-0912-445b-a09f-5ce90865a13b" path="/var/lib/kubelet/pods/9abaa29e-0912-445b-a09f-5ce90865a13b/volumes" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.851341 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3061d0c-7a27-4062-b2a7-12f8a1e1fac6" path="/var/lib/kubelet/pods/f3061d0c-7a27-4062-b2a7-12f8a1e1fac6/volumes" Jan 29 09:29:32 crc kubenswrapper[4771]: I0129 09:29:32.994947 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.022465 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23244fea-bb17-4ba0-b353-d4f98af3d93d","Type":"ContainerStarted","Data":"f37eb6bf6ba1c4723f1d641950f707badef93e6db3ecabdc25fca96f8d7bd63a"} Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.029534 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"222f1966-eb07-4bcb-986d-70287a36fc90","Type":"ContainerStarted","Data":"ac19b885ac5c241b04c736ae96ac483bf718ad38525d6137700dbf1c9ff73b5c"} Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.426924 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66c8d98dc5-z8wvg"] Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.428548 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.436198 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.466177 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c8d98dc5-z8wvg"] Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.539979 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-openstack-edpm-ipam\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.540075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-sb\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.540116 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-nb\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.540198 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6sc2\" (UniqueName: \"kubernetes.io/projected/8172daed-0429-485a-ae20-eaa64b1d7788-kube-api-access-p6sc2\") pod 
\"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.540264 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-config\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.540283 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-swift-storage-0\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.540369 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-svc\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.641668 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-sb\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.641764 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-nb\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.641864 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6sc2\" (UniqueName: \"kubernetes.io/projected/8172daed-0429-485a-ae20-eaa64b1d7788-kube-api-access-p6sc2\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.641927 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-config\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.641949 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-swift-storage-0\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.641972 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-svc\") pod 
\"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.642008 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-openstack-edpm-ipam\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.642765 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-nb\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.642801 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-openstack-edpm-ipam\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.643289 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-config\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.642801 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-sb\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.643520 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-swift-storage-0\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.643743 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-svc\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.671770 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6sc2\" (UniqueName: \"kubernetes.io/projected/8172daed-0429-485a-ae20-eaa64b1d7788-kube-api-access-p6sc2\") pod \"dnsmasq-dns-66c8d98dc5-z8wvg\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:33 crc kubenswrapper[4771]: I0129 09:29:33.759616 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:34 crc kubenswrapper[4771]: I0129 09:29:34.301579 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c8d98dc5-z8wvg"] Jan 29 09:29:35 crc kubenswrapper[4771]: I0129 09:29:35.056381 4771 generic.go:334] "Generic (PLEG): container finished" podID="8172daed-0429-485a-ae20-eaa64b1d7788" containerID="8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89" exitCode=0 Jan 29 09:29:35 crc kubenswrapper[4771]: I0129 09:29:35.056522 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" event={"ID":"8172daed-0429-485a-ae20-eaa64b1d7788","Type":"ContainerDied","Data":"8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89"} Jan 29 09:29:35 crc kubenswrapper[4771]: I0129 09:29:35.056772 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" event={"ID":"8172daed-0429-485a-ae20-eaa64b1d7788","Type":"ContainerStarted","Data":"88c3bdeffe03abb3657b749b98fd1c45c5291accdb92bfa6d46d9b9292c78e19"} Jan 29 09:29:35 crc kubenswrapper[4771]: I0129 09:29:35.060350 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23244fea-bb17-4ba0-b353-d4f98af3d93d","Type":"ContainerStarted","Data":"604581e4b2a6ff3be9e107c8d70f60f337b1fa49713e5685ba7b74f7ece21ff3"} Jan 29 09:29:35 crc kubenswrapper[4771]: I0129 09:29:35.063090 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"222f1966-eb07-4bcb-986d-70287a36fc90","Type":"ContainerStarted","Data":"78b68f49107e685e13634a74da7a1b02c5b8aef2ed2bfc65937f7fae98aeaee7"} Jan 29 09:29:36 crc kubenswrapper[4771]: I0129 09:29:36.094266 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" event={"ID":"8172daed-0429-485a-ae20-eaa64b1d7788","Type":"ContainerStarted","Data":"c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485"} Jan 29 09:29:36 crc kubenswrapper[4771]: I0129 09:29:36.095272 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:36 crc kubenswrapper[4771]: I0129 09:29:36.127036 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" podStartSLOduration=3.12701633 podStartE2EDuration="3.12701633s" podCreationTimestamp="2026-01-29 09:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:36.12149882 +0000 UTC m=+1396.244339077" watchObservedRunningTime="2026-01-29 09:29:36.12701633 +0000 UTC m=+1396.249856557" Jan 29 09:29:43 crc kubenswrapper[4771]: I0129 09:29:43.761167 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:43 crc kubenswrapper[4771]: I0129 09:29:43.827927 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fcbbbc747-4d9v4"] Jan 29 09:29:43 crc kubenswrapper[4771]: I0129 09:29:43.828179 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" podUID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" containerName="dnsmasq-dns" containerID="cri-o://9642996952f35979ec9f4d4521e88ca03df13454a59ba8bc0cdf93c623d8b39a" gracePeriod=10 Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 
09:29:44.003863 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578c4b6ff9-9qfgr"] Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.012984 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.029480 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578c4b6ff9-9qfgr"] Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.169850 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-dns-swift-storage-0\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.169926 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-dns-svc\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.169960 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-ovsdbserver-nb\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.170017 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-config\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.170148 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-ovsdbserver-sb\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.170372 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-openstack-edpm-ipam\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.170602 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2v9\" (UniqueName: \"kubernetes.io/projected/b23b9082-b814-455c-a31b-8df578081bf4-kube-api-access-pz2v9\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.174354 4771 generic.go:334] "Generic (PLEG): container finished" podID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" 
containerID="9642996952f35979ec9f4d4521e88ca03df13454a59ba8bc0cdf93c623d8b39a" exitCode=0 Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.174394 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" event={"ID":"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6","Type":"ContainerDied","Data":"9642996952f35979ec9f4d4521e88ca03df13454a59ba8bc0cdf93c623d8b39a"} Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273045 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-ovsdbserver-nb\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273130 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-config\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273157 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-ovsdbserver-sb\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273232 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-openstack-edpm-ipam\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273309 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz2v9\" (UniqueName: \"kubernetes.io/projected/b23b9082-b814-455c-a31b-8df578081bf4-kube-api-access-pz2v9\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273406 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-dns-swift-storage-0\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273434 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-dns-svc\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273220 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.273869 4771 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.274595 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-dns-svc\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.274669 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-ovsdbserver-nb\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.274760 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-config\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.275033 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-ovsdbserver-sb\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.275884 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-dns-swift-storage-0\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.277395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b23b9082-b814-455c-a31b-8df578081bf4-openstack-edpm-ipam\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.303295 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz2v9\" (UniqueName: \"kubernetes.io/projected/b23b9082-b814-455c-a31b-8df578081bf4-kube-api-access-pz2v9\") pod \"dnsmasq-dns-578c4b6ff9-9qfgr\" (UID: \"b23b9082-b814-455c-a31b-8df578081bf4\") " pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:44 crc kubenswrapper[4771]: I0129 09:29:44.353103 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.450802 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.583505 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-sb\") pod \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.583604 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-nb\") pod \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.583758 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8plk\" (UniqueName: \"kubernetes.io/projected/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-kube-api-access-c8plk\") pod \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.583936 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-swift-storage-0\") pod \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.584097 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-config\") pod \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.584160 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-svc\") pod \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\" (UID: \"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6\") " Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.589326 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-kube-api-access-c8plk" (OuterVolumeSpecName: "kube-api-access-c8plk") pod "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" (UID: "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6"). InnerVolumeSpecName "kube-api-access-c8plk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.642851 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" (UID: "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.654063 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-config" (OuterVolumeSpecName: "config") pod "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" (UID: "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.660474 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" (UID: "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.684967 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" (UID: "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.689561 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.689589 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8plk\" (UniqueName: \"kubernetes.io/projected/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-kube-api-access-c8plk\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.689601 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.689610 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.689619 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.691852 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" (UID: "3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:44.792163 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:45.208790 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" event={"ID":"3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6","Type":"ContainerDied","Data":"22313b0fa27453821c0cf1b0b7552af6171fac04618502865a3f865d609e28f2"} Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:45.208855 4771 scope.go:117] "RemoveContainer" containerID="9642996952f35979ec9f4d4521e88ca03df13454a59ba8bc0cdf93c623d8b39a" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:45.209056 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fcbbbc747-4d9v4" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:45.244127 4771 scope.go:117] "RemoveContainer" containerID="9dbd6eaf1f88ff35516391d2b216165630bc9ff3f19b70a006fc09f46b25a940" Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:45.247590 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fcbbbc747-4d9v4"] Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:45.256648 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fcbbbc747-4d9v4"] Jan 29 09:29:45 crc kubenswrapper[4771]: I0129 09:29:45.468981 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578c4b6ff9-9qfgr"] Jan 29 09:29:45 crc kubenswrapper[4771]: W0129 09:29:45.472131 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb23b9082_b814_455c_a31b_8df578081bf4.slice/crio-ee3155320b113bc5029c6df2f0e250b1c83940ff68e241ff5db092b577d5861e WatchSource:0}: Error finding container ee3155320b113bc5029c6df2f0e250b1c83940ff68e241ff5db092b577d5861e: Status 404 returned error can't find the container with id ee3155320b113bc5029c6df2f0e250b1c83940ff68e241ff5db092b577d5861e Jan 29 09:29:46 crc kubenswrapper[4771]: I0129 09:29:46.219008 4771 generic.go:334] "Generic (PLEG): container finished" podID="b23b9082-b814-455c-a31b-8df578081bf4" containerID="e13eac7640d8b3e95226f62063db2fbd3ae2d1b186ad4a61b61fac6da213ca38" exitCode=0 Jan 29 09:29:46 crc kubenswrapper[4771]: I0129 09:29:46.219169 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" event={"ID":"b23b9082-b814-455c-a31b-8df578081bf4","Type":"ContainerDied","Data":"e13eac7640d8b3e95226f62063db2fbd3ae2d1b186ad4a61b61fac6da213ca38"} Jan 29 09:29:46 crc kubenswrapper[4771]: I0129 09:29:46.219315 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" event={"ID":"b23b9082-b814-455c-a31b-8df578081bf4","Type":"ContainerStarted","Data":"ee3155320b113bc5029c6df2f0e250b1c83940ff68e241ff5db092b577d5861e"} Jan 29 09:29:46 crc kubenswrapper[4771]: I0129 09:29:46.849886 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" path="/var/lib/kubelet/pods/3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6/volumes" Jan 29 09:29:47 crc kubenswrapper[4771]: I0129 09:29:47.244147 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" 
event={"ID":"b23b9082-b814-455c-a31b-8df578081bf4","Type":"ContainerStarted","Data":"696e49f08e47b1e96337849191b8b24c02e404fbcf94338e053bc38571945acf"} Jan 29 09:29:47 crc kubenswrapper[4771]: I0129 09:29:47.244330 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:47 crc kubenswrapper[4771]: I0129 09:29:47.269784 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" podStartSLOduration=4.269763399 podStartE2EDuration="4.269763399s" podCreationTimestamp="2026-01-29 09:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:29:47.267543888 +0000 UTC m=+1407.390384125" watchObservedRunningTime="2026-01-29 09:29:47.269763399 +0000 UTC m=+1407.392603626" Jan 29 09:29:54 crc kubenswrapper[4771]: I0129 09:29:54.355965 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578c4b6ff9-9qfgr" Jan 29 09:29:54 crc kubenswrapper[4771]: I0129 09:29:54.442203 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c8d98dc5-z8wvg"] Jan 29 09:29:54 crc kubenswrapper[4771]: I0129 09:29:54.442563 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" podUID="8172daed-0429-485a-ae20-eaa64b1d7788" containerName="dnsmasq-dns" containerID="cri-o://c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485" gracePeriod=10 Jan 29 09:29:54 crc kubenswrapper[4771]: I0129 09:29:54.901066 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.015782 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-nb\") pod \"8172daed-0429-485a-ae20-eaa64b1d7788\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.015889 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-svc\") pod \"8172daed-0429-485a-ae20-eaa64b1d7788\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.015974 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-config\") pod \"8172daed-0429-485a-ae20-eaa64b1d7788\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.015991 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-sb\") pod \"8172daed-0429-485a-ae20-eaa64b1d7788\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.016024 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6sc2\" (UniqueName: \"kubernetes.io/projected/8172daed-0429-485a-ae20-eaa64b1d7788-kube-api-access-p6sc2\") pod \"8172daed-0429-485a-ae20-eaa64b1d7788\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " Jan 29 
09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.016063 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-swift-storage-0\") pod \"8172daed-0429-485a-ae20-eaa64b1d7788\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.016084 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-openstack-edpm-ipam\") pod \"8172daed-0429-485a-ae20-eaa64b1d7788\" (UID: \"8172daed-0429-485a-ae20-eaa64b1d7788\") " Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.021771 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8172daed-0429-485a-ae20-eaa64b1d7788-kube-api-access-p6sc2" (OuterVolumeSpecName: "kube-api-access-p6sc2") pod "8172daed-0429-485a-ae20-eaa64b1d7788" (UID: "8172daed-0429-485a-ae20-eaa64b1d7788"). InnerVolumeSpecName "kube-api-access-p6sc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.073609 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8172daed-0429-485a-ae20-eaa64b1d7788" (UID: "8172daed-0429-485a-ae20-eaa64b1d7788"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.082054 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8172daed-0429-485a-ae20-eaa64b1d7788" (UID: "8172daed-0429-485a-ae20-eaa64b1d7788"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.083109 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-config" (OuterVolumeSpecName: "config") pod "8172daed-0429-485a-ae20-eaa64b1d7788" (UID: "8172daed-0429-485a-ae20-eaa64b1d7788"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.083895 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8172daed-0429-485a-ae20-eaa64b1d7788" (UID: "8172daed-0429-485a-ae20-eaa64b1d7788"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.088318 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8172daed-0429-485a-ae20-eaa64b1d7788" (UID: "8172daed-0429-485a-ae20-eaa64b1d7788"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.104777 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8172daed-0429-485a-ae20-eaa64b1d7788" (UID: "8172daed-0429-485a-ae20-eaa64b1d7788"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.118615 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.118770 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.118785 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.118798 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6sc2\" (UniqueName: \"kubernetes.io/projected/8172daed-0429-485a-ae20-eaa64b1d7788-kube-api-access-p6sc2\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.118828 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.118836 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.118844 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8172daed-0429-485a-ae20-eaa64b1d7788-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.328077 4771 generic.go:334] "Generic (PLEG): container finished" podID="8172daed-0429-485a-ae20-eaa64b1d7788" containerID="c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485" exitCode=0 Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.328125 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" event={"ID":"8172daed-0429-485a-ae20-eaa64b1d7788","Type":"ContainerDied","Data":"c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485"} Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.328158 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" event={"ID":"8172daed-0429-485a-ae20-eaa64b1d7788","Type":"ContainerDied","Data":"88c3bdeffe03abb3657b749b98fd1c45c5291accdb92bfa6d46d9b9292c78e19"} Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.328182 4771 scope.go:117] "RemoveContainer" containerID="c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.328200 4771 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-66c8d98dc5-z8wvg" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.395765 4771 scope.go:117] "RemoveContainer" containerID="8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.417753 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c8d98dc5-z8wvg"] Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.429254 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66c8d98dc5-z8wvg"] Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.443133 4771 scope.go:117] "RemoveContainer" containerID="c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485" Jan 29 09:29:55 crc kubenswrapper[4771]: E0129 09:29:55.453161 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485\": container with ID starting with c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485 not found: ID does not exist" containerID="c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.453257 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485"} err="failed to get container status \"c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485\": rpc error: code = NotFound desc = could not find container \"c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485\": container with ID starting with c8ac331a1798c4da27bfd654286db7f77037998983b041dc46ad2c70f2f7a485 not found: ID does not exist" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.453325 4771 scope.go:117] "RemoveContainer" containerID="8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89" Jan 29 09:29:55 crc kubenswrapper[4771]: E0129 09:29:55.456100 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89\": container with ID starting with 8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89 not found: ID does not exist" containerID="8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.456148 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89"} err="failed to get container status \"8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89\": rpc error: code = NotFound desc = could not find container \"8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89\": container with ID starting with 8d6f71babac0327544f040b3eb086a6e8308efa0252587cf0469763056beba89 not found: ID does not exist" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.866099 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65299"] Jan 29 09:29:55 crc kubenswrapper[4771]: E0129 09:29:55.866522 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" containerName="init" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.866546 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" containerName="init" Jan 29 09:29:55 crc kubenswrapper[4771]: E0129 09:29:55.866561 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" containerName="dnsmasq-dns" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.866568 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" containerName="dnsmasq-dns" Jan 29 09:29:55 crc kubenswrapper[4771]: E0129 09:29:55.866577 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8172daed-0429-485a-ae20-eaa64b1d7788" containerName="init" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.866584 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8172daed-0429-485a-ae20-eaa64b1d7788" containerName="init" Jan 29 09:29:55 crc kubenswrapper[4771]: E0129 09:29:55.866616 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8172daed-0429-485a-ae20-eaa64b1d7788" containerName="dnsmasq-dns" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.866622 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8172daed-0429-485a-ae20-eaa64b1d7788" containerName="dnsmasq-dns" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.866820 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bcafb82-4d5d-4bf1-8739-2cbc6024e5e6" containerName="dnsmasq-dns" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.866832 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8172daed-0429-485a-ae20-eaa64b1d7788" containerName="dnsmasq-dns" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.868077 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.873442 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65299"] Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.932239 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm8kh\" (UniqueName: \"kubernetes.io/projected/668d1e64-b62d-41a4-93c3-a43725f80a0f-kube-api-access-xm8kh\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.932365 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-catalog-content\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:55 crc kubenswrapper[4771]: I0129 09:29:55.932465 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-utilities\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.034563 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-catalog-content\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " 
pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.034669 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-utilities\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.034754 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm8kh\" (UniqueName: \"kubernetes.io/projected/668d1e64-b62d-41a4-93c3-a43725f80a0f-kube-api-access-xm8kh\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.035215 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-catalog-content\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.035275 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-utilities\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.054316 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm8kh\" (UniqueName: \"kubernetes.io/projected/668d1e64-b62d-41a4-93c3-a43725f80a0f-kube-api-access-xm8kh\") pod \"redhat-operators-65299\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") " pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.185103 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.675552 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65299"] Jan 29 09:29:56 crc kubenswrapper[4771]: I0129 09:29:56.871307 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8172daed-0429-485a-ae20-eaa64b1d7788" path="/var/lib/kubelet/pods/8172daed-0429-485a-ae20-eaa64b1d7788/volumes" Jan 29 09:29:57 crc kubenswrapper[4771]: I0129 09:29:57.347267 4771 generic.go:334] "Generic (PLEG): container finished" podID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerID="04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02" exitCode=0 Jan 29 09:29:57 crc kubenswrapper[4771]: I0129 09:29:57.347303 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65299" event={"ID":"668d1e64-b62d-41a4-93c3-a43725f80a0f","Type":"ContainerDied","Data":"04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02"} Jan 29 09:29:57 crc kubenswrapper[4771]: I0129 09:29:57.347327 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65299" event={"ID":"668d1e64-b62d-41a4-93c3-a43725f80a0f","Type":"ContainerStarted","Data":"e8fa4663106d1aa323f81597199f1e477688dbeedd8b4ea1259c98cff0356e18"} Jan 29 09:29:58 crc kubenswrapper[4771]: I0129 09:29:58.359656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65299" event={"ID":"668d1e64-b62d-41a4-93c3-a43725f80a0f","Type":"ContainerStarted","Data":"6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6"} Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.159022 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6"] Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.166612 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.169681 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.179066 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.193833 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6"] Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.326737 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-config-volume\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.326785 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9w5\" (UniqueName: \"kubernetes.io/projected/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-kube-api-access-6w9w5\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.326816 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-secret-volume\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.378017 4771 generic.go:334] "Generic (PLEG): container finished" podID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerID="6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6" exitCode=0 Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.378056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65299" event={"ID":"668d1e64-b62d-41a4-93c3-a43725f80a0f","Type":"ContainerDied","Data":"6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6"} Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.429070 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-config-volume\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.429127 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w9w5\" (UniqueName: \"kubernetes.io/projected/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-kube-api-access-6w9w5\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.429170 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-secret-volume\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.430046 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-config-volume\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.434899 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-secret-volume\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.445755 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w9w5\" (UniqueName: \"kubernetes.io/projected/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-kube-api-access-6w9w5\") pod \"collect-profiles-29494650-tknk6\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.482240 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:00 crc kubenswrapper[4771]: I0129 09:30:00.933991 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6"] Jan 29 09:30:00 crc kubenswrapper[4771]: W0129 09:30:00.948462 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c1710e_3c07_44fb_9ab4_4d346e0f02e3.slice/crio-e56413d87fc2de34554ca49f8ee3b6cf6f0c48158651d2631d83ed550f7f8ed1 WatchSource:0}: Error finding container e56413d87fc2de34554ca49f8ee3b6cf6f0c48158651d2631d83ed550f7f8ed1: Status 404 returned error can't find the container with id e56413d87fc2de34554ca49f8ee3b6cf6f0c48158651d2631d83ed550f7f8ed1 Jan 29 09:30:01 crc kubenswrapper[4771]: I0129 09:30:01.388919 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" event={"ID":"16c1710e-3c07-44fb-9ab4-4d346e0f02e3","Type":"ContainerStarted","Data":"e56413d87fc2de34554ca49f8ee3b6cf6f0c48158651d2631d83ed550f7f8ed1"} Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.401390 4771 generic.go:334] "Generic (PLEG): container finished" podID="16c1710e-3c07-44fb-9ab4-4d346e0f02e3" containerID="d2cab39e5c6bcadbbfd6ef771c8824e708f2f2a4dd0450f4eb03ff98e907fee7" exitCode=0 Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.401503 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" event={"ID":"16c1710e-3c07-44fb-9ab4-4d346e0f02e3","Type":"ContainerDied","Data":"d2cab39e5c6bcadbbfd6ef771c8824e708f2f2a4dd0450f4eb03ff98e907fee7"} Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.935069 4771 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6"] Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.936872 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.940074 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.940136 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.940302 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.944036 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:30:02 crc kubenswrapper[4771]: I0129 09:30:02.952542 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6"] Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.090252 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m28br\" (UniqueName: \"kubernetes.io/projected/11b71af0-5437-4b20-a2a0-68897b1f8c78-kube-api-access-m28br\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.090331 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.090358 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.090437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.192580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m28br\" (UniqueName: \"kubernetes.io/projected/11b71af0-5437-4b20-a2a0-68897b1f8c78-kube-api-access-m28br\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 
09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.192690 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.192763 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.192892 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.198446 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.198787 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.199523 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.213854 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m28br\" (UniqueName: \"kubernetes.io/projected/11b71af0-5437-4b20-a2a0-68897b1f8c78-kube-api-access-m28br\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.255488 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.423685 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65299" event={"ID":"668d1e64-b62d-41a4-93c3-a43725f80a0f","Type":"ContainerStarted","Data":"f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d"} Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.446797 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-65299" podStartSLOduration=3.611254885 podStartE2EDuration="8.446778806s" podCreationTimestamp="2026-01-29 09:29:55 +0000 UTC" firstStartedPulling="2026-01-29 09:29:57.348943935 +0000 UTC m=+1417.471784162" lastFinishedPulling="2026-01-29 09:30:02.184467836 +0000 UTC m=+1422.307308083" observedRunningTime="2026-01-29 09:30:03.443087155 +0000 UTC m=+1423.565927382" watchObservedRunningTime="2026-01-29 09:30:03.446778806 +0000 UTC m=+1423.569619033" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.773492 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.832352 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6"] Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.905616 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w9w5\" (UniqueName: \"kubernetes.io/projected/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-kube-api-access-6w9w5\") pod \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.905786 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-secret-volume\") pod \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.905824 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-config-volume\") pod \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\" (UID: \"16c1710e-3c07-44fb-9ab4-4d346e0f02e3\") " Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.906820 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "16c1710e-3c07-44fb-9ab4-4d346e0f02e3" (UID: "16c1710e-3c07-44fb-9ab4-4d346e0f02e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.912947 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-kube-api-access-6w9w5" (OuterVolumeSpecName: "kube-api-access-6w9w5") pod "16c1710e-3c07-44fb-9ab4-4d346e0f02e3" (UID: "16c1710e-3c07-44fb-9ab4-4d346e0f02e3"). InnerVolumeSpecName "kube-api-access-6w9w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:03 crc kubenswrapper[4771]: I0129 09:30:03.914174 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16c1710e-3c07-44fb-9ab4-4d346e0f02e3" (UID: "16c1710e-3c07-44fb-9ab4-4d346e0f02e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[4771]: I0129 09:30:04.007651 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w9w5\" (UniqueName: \"kubernetes.io/projected/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-kube-api-access-6w9w5\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[4771]: I0129 09:30:04.007729 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[4771]: I0129 09:30:04.007739 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c1710e-3c07-44fb-9ab4-4d346e0f02e3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[4771]: I0129 09:30:04.436448 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" event={"ID":"16c1710e-3c07-44fb-9ab4-4d346e0f02e3","Type":"ContainerDied","Data":"e56413d87fc2de34554ca49f8ee3b6cf6f0c48158651d2631d83ed550f7f8ed1"} Jan 29 09:30:04 crc kubenswrapper[4771]: I0129 09:30:04.436493 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56413d87fc2de34554ca49f8ee3b6cf6f0c48158651d2631d83ed550f7f8ed1" Jan 29 09:30:04 crc kubenswrapper[4771]: I0129 09:30:04.436513 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6" Jan 29 09:30:04 crc kubenswrapper[4771]: I0129 09:30:04.447905 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" event={"ID":"11b71af0-5437-4b20-a2a0-68897b1f8c78","Type":"ContainerStarted","Data":"71449f778ffc230db4d822cc5c3a07fbc4168f6a356a54ea4797d92183793858"} Jan 29 09:30:06 crc kubenswrapper[4771]: I0129 09:30:06.186402 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:30:06 crc kubenswrapper[4771]: I0129 09:30:06.189233 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:30:07 crc kubenswrapper[4771]: I0129 09:30:07.268549 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-65299" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server" probeResult="failure" output=< Jan 29 09:30:07 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:30:07 crc kubenswrapper[4771]: > Jan 29 09:30:07 crc kubenswrapper[4771]: I0129 09:30:07.488077 4771 generic.go:334] "Generic (PLEG): container finished" podID="23244fea-bb17-4ba0-b353-d4f98af3d93d" containerID="604581e4b2a6ff3be9e107c8d70f60f337b1fa49713e5685ba7b74f7ece21ff3" exitCode=0 Jan 29 09:30:07 crc kubenswrapper[4771]: I0129 09:30:07.488143 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23244fea-bb17-4ba0-b353-d4f98af3d93d","Type":"ContainerDied","Data":"604581e4b2a6ff3be9e107c8d70f60f337b1fa49713e5685ba7b74f7ece21ff3"} Jan 29 09:30:07 crc kubenswrapper[4771]: I0129 09:30:07.491050 4771 generic.go:334] "Generic (PLEG): container finished" podID="222f1966-eb07-4bcb-986d-70287a36fc90" containerID="78b68f49107e685e13634a74da7a1b02c5b8aef2ed2bfc65937f7fae98aeaee7" exitCode=0 Jan 29 09:30:07 crc kubenswrapper[4771]: I0129 09:30:07.491071 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"222f1966-eb07-4bcb-986d-70287a36fc90","Type":"ContainerDied","Data":"78b68f49107e685e13634a74da7a1b02c5b8aef2ed2bfc65937f7fae98aeaee7"} Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.271202 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.271797 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.271843 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.272657 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"182b687cc8105c26ca625b4fc83c6757431c23f77bbdc17b6ccd0dabae3f7a24"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.272747 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://182b687cc8105c26ca625b4fc83c6757431c23f77bbdc17b6ccd0dabae3f7a24" gracePeriod=600 Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.577767 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" event={"ID":"11b71af0-5437-4b20-a2a0-68897b1f8c78","Type":"ContainerStarted","Data":"af4bda2f6ca7af75374a28a2700179f88022bb64d803c87363cc2fa1acaa160e"} Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.585154 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"222f1966-eb07-4bcb-986d-70287a36fc90","Type":"ContainerStarted","Data":"77540d09aae7a92734883082790481536134061afd014e84436016a469d3c5b1"} Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.586146 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.588224 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23244fea-bb17-4ba0-b353-d4f98af3d93d","Type":"ContainerStarted","Data":"88f0ce0bd5581d3d9646793bbc193c0d8aa23e17ced947cf1fad0a6882817c76"} Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.588543 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.595103 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" podStartSLOduration=2.797224387 podStartE2EDuration="12.595081885s" podCreationTimestamp="2026-01-29 09:30:02 +0000 UTC" firstStartedPulling="2026-01-29 09:30:03.834573097 +0000 UTC m=+1423.957413324" lastFinishedPulling="2026-01-29 09:30:13.632430595 +0000 UTC m=+1433.755270822" observedRunningTime="2026-01-29 09:30:14.59416044 +0000 UTC m=+1434.717000667" watchObservedRunningTime="2026-01-29 09:30:14.595081885 +0000 UTC m=+1434.717922112" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.596817 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="182b687cc8105c26ca625b4fc83c6757431c23f77bbdc17b6ccd0dabae3f7a24" exitCode=0 Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.596867 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"182b687cc8105c26ca625b4fc83c6757431c23f77bbdc17b6ccd0dabae3f7a24"} Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.596906 4771 scope.go:117] "RemoveContainer" containerID="1b3ab7b9f2df880b7295ed04362ae5024763966067568a703d675444eb8c341b" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.625860 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=43.625844643 podStartE2EDuration="43.625844643s" podCreationTimestamp="2026-01-29 09:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:30:14.618231536 +0000 UTC m=+1434.741071763" watchObservedRunningTime="2026-01-29 09:30:14.625844643 +0000 UTC m=+1434.748684870" Jan 29 09:30:14 crc kubenswrapper[4771]: I0129 09:30:14.647758 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.64773888 podStartE2EDuration="42.64773888s" podCreationTimestamp="2026-01-29 09:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:30:14.63672544 +0000 UTC m=+1434.759565667" watchObservedRunningTime="2026-01-29 09:30:14.64773888 +0000 UTC m=+1434.770579107" Jan 29 09:30:15 crc kubenswrapper[4771]: I0129 09:30:15.607485 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4"} Jan 29 09:30:17 crc kubenswrapper[4771]: I0129 09:30:17.237212 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-65299" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server" probeResult="failure" output=< Jan 29 09:30:17 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:30:17 crc kubenswrapper[4771]: > Jan 29 09:30:25 crc kubenswrapper[4771]: I0129 09:30:25.714316 4771 generic.go:334] "Generic (PLEG): container finished" podID="11b71af0-5437-4b20-a2a0-68897b1f8c78" containerID="af4bda2f6ca7af75374a28a2700179f88022bb64d803c87363cc2fa1acaa160e" exitCode=0 Jan 29 09:30:25 crc kubenswrapper[4771]: I0129 09:30:25.714412 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" event={"ID":"11b71af0-5437-4b20-a2a0-68897b1f8c78","Type":"ContainerDied","Data":"af4bda2f6ca7af75374a28a2700179f88022bb64d803c87363cc2fa1acaa160e"} Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.258364 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-65299" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server" probeResult="failure" output=< Jan 29 09:30:27 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:30:27 crc kubenswrapper[4771]: > Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.266003 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.309175 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-ssh-key-openstack-edpm-ipam\") pod \"11b71af0-5437-4b20-a2a0-68897b1f8c78\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.309664 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-repo-setup-combined-ca-bundle\") pod \"11b71af0-5437-4b20-a2a0-68897b1f8c78\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.309825 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-inventory\") pod \"11b71af0-5437-4b20-a2a0-68897b1f8c78\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.309932 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m28br\" (UniqueName: \"kubernetes.io/projected/11b71af0-5437-4b20-a2a0-68897b1f8c78-kube-api-access-m28br\") pod \"11b71af0-5437-4b20-a2a0-68897b1f8c78\" (UID: \"11b71af0-5437-4b20-a2a0-68897b1f8c78\") " Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.315770 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "11b71af0-5437-4b20-a2a0-68897b1f8c78" (UID: "11b71af0-5437-4b20-a2a0-68897b1f8c78"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.315936 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b71af0-5437-4b20-a2a0-68897b1f8c78-kube-api-access-m28br" (OuterVolumeSpecName: "kube-api-access-m28br") pod "11b71af0-5437-4b20-a2a0-68897b1f8c78" (UID: "11b71af0-5437-4b20-a2a0-68897b1f8c78"). InnerVolumeSpecName "kube-api-access-m28br". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.337765 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-inventory" (OuterVolumeSpecName: "inventory") pod "11b71af0-5437-4b20-a2a0-68897b1f8c78" (UID: "11b71af0-5437-4b20-a2a0-68897b1f8c78"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.351932 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11b71af0-5437-4b20-a2a0-68897b1f8c78" (UID: "11b71af0-5437-4b20-a2a0-68897b1f8c78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.412678 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.412740 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m28br\" (UniqueName: \"kubernetes.io/projected/11b71af0-5437-4b20-a2a0-68897b1f8c78-kube-api-access-m28br\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.412754 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.412766 4771 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b71af0-5437-4b20-a2a0-68897b1f8c78-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.738411 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" event={"ID":"11b71af0-5437-4b20-a2a0-68897b1f8c78","Type":"ContainerDied","Data":"71449f778ffc230db4d822cc5c3a07fbc4168f6a356a54ea4797d92183793858"} Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.738473 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71449f778ffc230db4d822cc5c3a07fbc4168f6a356a54ea4797d92183793858" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.738520 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.833751 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg"] Jan 29 09:30:27 crc kubenswrapper[4771]: E0129 09:30:27.834205 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b71af0-5437-4b20-a2a0-68897b1f8c78" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.834227 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b71af0-5437-4b20-a2a0-68897b1f8c78" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 09:30:27 crc kubenswrapper[4771]: E0129 09:30:27.834249 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c1710e-3c07-44fb-9ab4-4d346e0f02e3" containerName="collect-profiles" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.834258 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c1710e-3c07-44fb-9ab4-4d346e0f02e3" containerName="collect-profiles" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.834506 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c1710e-3c07-44fb-9ab4-4d346e0f02e3" containerName="collect-profiles" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.834543 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b71af0-5437-4b20-a2a0-68897b1f8c78" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.835258 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.839023 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.839108 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.839192 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.839857 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.861215 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg"] Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.924818 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.924979 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4nf\" (UniqueName: \"kubernetes.io/projected/1861fab3-9c27-41e5-b792-6df4cb346a1d-kube-api-access-7p4nf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:27 crc kubenswrapper[4771]: I0129 09:30:27.925138 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.026506 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.026572 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.026647 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p4nf\" (UniqueName: \"kubernetes.io/projected/1861fab3-9c27-41e5-b792-6df4cb346a1d-kube-api-access-7p4nf\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.031270 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.031434 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.054685 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p4nf\" (UniqueName: \"kubernetes.io/projected/1861fab3-9c27-41e5-b792-6df4cb346a1d-kube-api-access-7p4nf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4hnbg\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.154725 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.730234 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg"] Jan 29 09:30:28 crc kubenswrapper[4771]: W0129 09:30:28.736274 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1861fab3_9c27_41e5_b792_6df4cb346a1d.slice/crio-631e012a8bfa2053237e5d2e0b2b0b425abe5464d252ff64239e651da3132b81 WatchSource:0}: Error finding container 631e012a8bfa2053237e5d2e0b2b0b425abe5464d252ff64239e651da3132b81: Status 404 returned error can't find the container with id 631e012a8bfa2053237e5d2e0b2b0b425abe5464d252ff64239e651da3132b81 Jan 29 09:30:28 crc kubenswrapper[4771]: I0129 09:30:28.750525 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" event={"ID":"1861fab3-9c27-41e5-b792-6df4cb346a1d","Type":"ContainerStarted","Data":"631e012a8bfa2053237e5d2e0b2b0b425abe5464d252ff64239e651da3132b81"} Jan 29 09:30:29 crc kubenswrapper[4771]: I0129 09:30:29.768909 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" event={"ID":"1861fab3-9c27-41e5-b792-6df4cb346a1d","Type":"ContainerStarted","Data":"3a0d60dfede8e84b308c2b30589ffc9daf1fdc0746d48e041e2231dcf7a6b3f7"} Jan 29 09:30:29 crc kubenswrapper[4771]: I0129 09:30:29.806564 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" podStartSLOduration=2.376683256 podStartE2EDuration="2.806522332s" podCreationTimestamp="2026-01-29 09:30:27 +0000 UTC" firstStartedPulling="2026-01-29 09:30:28.740686429 +0000 UTC m=+1448.863526676" lastFinishedPulling="2026-01-29 09:30:29.170525525 +0000 UTC m=+1449.293365752" 
observedRunningTime="2026-01-29 09:30:29.797066854 +0000 UTC m=+1449.919907081" watchObservedRunningTime="2026-01-29 09:30:29.806522332 +0000 UTC m=+1449.929362599" Jan 29 09:30:31 crc kubenswrapper[4771]: I0129 09:30:31.532064 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 09:30:32 crc kubenswrapper[4771]: I0129 09:30:32.483103 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:30:32 crc kubenswrapper[4771]: I0129 09:30:32.805714 4771 generic.go:334] "Generic (PLEG): container finished" podID="1861fab3-9c27-41e5-b792-6df4cb346a1d" containerID="3a0d60dfede8e84b308c2b30589ffc9daf1fdc0746d48e041e2231dcf7a6b3f7" exitCode=0 Jan 29 09:30:32 crc kubenswrapper[4771]: I0129 09:30:32.805965 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" event={"ID":"1861fab3-9c27-41e5-b792-6df4cb346a1d","Type":"ContainerDied","Data":"3a0d60dfede8e84b308c2b30589ffc9daf1fdc0746d48e041e2231dcf7a6b3f7"} Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.281403 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.470488 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-ssh-key-openstack-edpm-ipam\") pod \"1861fab3-9c27-41e5-b792-6df4cb346a1d\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.470614 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p4nf\" (UniqueName: \"kubernetes.io/projected/1861fab3-9c27-41e5-b792-6df4cb346a1d-kube-api-access-7p4nf\") pod \"1861fab3-9c27-41e5-b792-6df4cb346a1d\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.470640 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-inventory\") pod \"1861fab3-9c27-41e5-b792-6df4cb346a1d\" (UID: \"1861fab3-9c27-41e5-b792-6df4cb346a1d\") " Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.477762 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1861fab3-9c27-41e5-b792-6df4cb346a1d-kube-api-access-7p4nf" (OuterVolumeSpecName: "kube-api-access-7p4nf") pod "1861fab3-9c27-41e5-b792-6df4cb346a1d" (UID: "1861fab3-9c27-41e5-b792-6df4cb346a1d"). InnerVolumeSpecName "kube-api-access-7p4nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.502785 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-inventory" (OuterVolumeSpecName: "inventory") pod "1861fab3-9c27-41e5-b792-6df4cb346a1d" (UID: "1861fab3-9c27-41e5-b792-6df4cb346a1d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.508349 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1861fab3-9c27-41e5-b792-6df4cb346a1d" (UID: "1861fab3-9c27-41e5-b792-6df4cb346a1d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.572770 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.572836 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p4nf\" (UniqueName: \"kubernetes.io/projected/1861fab3-9c27-41e5-b792-6df4cb346a1d-kube-api-access-7p4nf\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.572847 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1861fab3-9c27-41e5-b792-6df4cb346a1d-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.825779 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" event={"ID":"1861fab3-9c27-41e5-b792-6df4cb346a1d","Type":"ContainerDied","Data":"631e012a8bfa2053237e5d2e0b2b0b425abe5464d252ff64239e651da3132b81"} Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.825828 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631e012a8bfa2053237e5d2e0b2b0b425abe5464d252ff64239e651da3132b81" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.825839 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4hnbg" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.907762 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"] Jan 29 09:30:34 crc kubenswrapper[4771]: E0129 09:30:34.908134 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1861fab3-9c27-41e5-b792-6df4cb346a1d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.908157 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1861fab3-9c27-41e5-b792-6df4cb346a1d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.908405 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1861fab3-9c27-41e5-b792-6df4cb346a1d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.909091 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.907762 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"]
Jan 29 09:30:34 crc kubenswrapper[4771]: E0129 09:30:34.908134 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1861fab3-9c27-41e5-b792-6df4cb346a1d" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.908157 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1861fab3-9c27-41e5-b792-6df4cb346a1d" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.908405 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1861fab3-9c27-41e5-b792-6df4cb346a1d" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.909091 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.911107 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.911132 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.912722 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.912723 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 09:30:34 crc kubenswrapper[4771]: I0129 09:30:34.919331 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"]
Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.082387 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"
Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.083125 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"
Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.083329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"
Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.083481 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz868\" (UniqueName: \"kubernetes.io/projected/d2af364a-dc24-46dc-bd14-8ad420af1812-kube-api-access-vz868\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"
Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.185182 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"
Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.185274 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz868\" (UniqueName: 
\"kubernetes.io/projected/d2af364a-dc24-46dc-bd14-8ad420af1812-kube-api-access-vz868\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.185340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.185366 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.190051 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.190188 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.207035 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.221459 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz868\" (UniqueName: \"kubernetes.io/projected/d2af364a-dc24-46dc-bd14-8ad420af1812-kube-api-access-vz868\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.224966 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"
Jan 29 09:30:35 crc kubenswrapper[4771]: W0129 09:30:35.849982 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2af364a_dc24_46dc_bd14_8ad420af1812.slice/crio-79c0950ef86ebed49c7b1194edf09b9268437d282e07ed4ee4722b655870fa7d WatchSource:0}: Error finding container 79c0950ef86ebed49c7b1194edf09b9268437d282e07ed4ee4722b655870fa7d: Status 404 returned error can't find the container with id 79c0950ef86ebed49c7b1194edf09b9268437d282e07ed4ee4722b655870fa7d
Jan 29 09:30:35 crc kubenswrapper[4771]: I0129 09:30:35.856607 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq"]
Jan 29 09:30:36 crc kubenswrapper[4771]: I0129 09:30:36.257226 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65299"
Jan 29 09:30:36 crc kubenswrapper[4771]: I0129 09:30:36.331121 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65299"
Jan 29 09:30:36 crc kubenswrapper[4771]: I0129 09:30:36.523388 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65299"]
Jan 29 09:30:36 crc kubenswrapper[4771]: I0129 09:30:36.854823 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" event={"ID":"d2af364a-dc24-46dc-bd14-8ad420af1812","Type":"ContainerStarted","Data":"79c0950ef86ebed49c7b1194edf09b9268437d282e07ed4ee4722b655870fa7d"}
Jan 29 09:30:37 crc kubenswrapper[4771]: I0129 09:30:37.866020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" event={"ID":"d2af364a-dc24-46dc-bd14-8ad420af1812","Type":"ContainerStarted","Data":"c1933524ac11a97b2c141068de1940dbaf30d0935d5d9417e28c7d8d49d3d1f8"}
Jan 29 09:30:37 crc kubenswrapper[4771]: I0129 09:30:37.866447 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-65299" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server" containerID="cri-o://f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d" gracePeriod=2
Jan 29 09:30:37 crc kubenswrapper[4771]: I0129 09:30:37.902748 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" podStartSLOduration=2.9638792130000002 podStartE2EDuration="3.902728665s" podCreationTimestamp="2026-01-29 09:30:34 +0000 UTC" firstStartedPulling="2026-01-29 09:30:35.852736735 +0000 UTC m=+1455.975576962" lastFinishedPulling="2026-01-29 09:30:36.791586187 +0000 UTC m=+1456.914426414" observedRunningTime="2026-01-29 09:30:37.893814712 +0000 UTC m=+1458.016654939" watchObservedRunningTime="2026-01-29 09:30:37.902728665 +0000 UTC m=+1458.025568892"
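
The two durations in the "Observed pod startup duration" record above are linked by the image-pull window: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (09:30:37.902728665 minus 09:30:34 = 3.902728665s), and podStartSLOduration excludes time spent pulling images, that is, E2E minus (lastFinishedPulling minus firstStartedPulling) = 3.902728665s minus 0.938849452s = 2.963879213s; the trailing ...0000002 in the logged value is float64 rounding noise. A quick self-check in Go, with the timestamps copied from the record (the subtraction rule is inferred from the fact that the numbers match exactly):

```go
// Recompute the two durations reported by pod_startup_latency_tracker for
// bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching Go's default time.Time formatting used in the record.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-29 09:30:34 +0000 UTC")
	firstPull := mustParse("2026-01-29 09:30:35.852736735 +0000 UTC")
	lastPull := mustParse("2026-01-29 09:30:36.791586187 +0000 UTC")
	observed := mustParse("2026-01-29 09:30:37.902728665 +0000 UTC")

	e2e := observed.Sub(created)         // wall-clock time from creation to running
	slo := e2e - lastPull.Sub(firstPull) // the same, minus the image-pull window

	fmt.Println(e2e) // 3.902728665s == podStartE2EDuration
	fmt.Println(slo) // 2.963879213s == podStartSLOduration (modulo float64 noise)
}
```
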
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.366777 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65299"
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.497046 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-utilities\") pod \"668d1e64-b62d-41a4-93c3-a43725f80a0f\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") "
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.497184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm8kh\" (UniqueName: \"kubernetes.io/projected/668d1e64-b62d-41a4-93c3-a43725f80a0f-kube-api-access-xm8kh\") pod \"668d1e64-b62d-41a4-93c3-a43725f80a0f\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") "
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.497239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-catalog-content\") pod \"668d1e64-b62d-41a4-93c3-a43725f80a0f\" (UID: \"668d1e64-b62d-41a4-93c3-a43725f80a0f\") "
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.497775 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-utilities" (OuterVolumeSpecName: "utilities") pod "668d1e64-b62d-41a4-93c3-a43725f80a0f" (UID: "668d1e64-b62d-41a4-93c3-a43725f80a0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.517076 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668d1e64-b62d-41a4-93c3-a43725f80a0f-kube-api-access-xm8kh" (OuterVolumeSpecName: "kube-api-access-xm8kh") pod "668d1e64-b62d-41a4-93c3-a43725f80a0f" (UID: "668d1e64-b62d-41a4-93c3-a43725f80a0f"). InnerVolumeSpecName "kube-api-access-xm8kh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.599481 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.599755 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm8kh\" (UniqueName: \"kubernetes.io/projected/668d1e64-b62d-41a4-93c3-a43725f80a0f-kube-api-access-xm8kh\") on node \"crc\" DevicePath \"\""
Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.606846 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "668d1e64-b62d-41a4-93c3-a43725f80a0f" (UID: "668d1e64-b62d-41a4-93c3-a43725f80a0f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.701490 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668d1e64-b62d-41a4-93c3-a43725f80a0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.889062 4771 generic.go:334] "Generic (PLEG): container finished" podID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerID="f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d" exitCode=0 Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.889137 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65299" Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.889138 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65299" event={"ID":"668d1e64-b62d-41a4-93c3-a43725f80a0f","Type":"ContainerDied","Data":"f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d"} Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.889200 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65299" event={"ID":"668d1e64-b62d-41a4-93c3-a43725f80a0f","Type":"ContainerDied","Data":"e8fa4663106d1aa323f81597199f1e477688dbeedd8b4ea1259c98cff0356e18"} Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.889218 4771 scope.go:117] "RemoveContainer" containerID="f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d" Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.916957 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65299"] Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.924794 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-65299"] Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.934804 4771 scope.go:117] "RemoveContainer" containerID="6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6" Jan 29 09:30:38 crc kubenswrapper[4771]: I0129 09:30:38.969222 4771 scope.go:117] "RemoveContainer" containerID="04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02" Jan 29 09:30:39 crc kubenswrapper[4771]: I0129 09:30:39.000973 4771 scope.go:117] "RemoveContainer" containerID="f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d" Jan 29 09:30:39 crc kubenswrapper[4771]: E0129 09:30:39.001562 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d\": container with ID starting with f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d not found: ID does not exist" containerID="f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d" Jan 29 09:30:39 crc kubenswrapper[4771]: I0129 09:30:39.001862 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d"} err="failed to get container status \"f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d\": rpc error: code = NotFound desc = could not find container \"f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d\": container with ID starting with f66010dda9dc139ed163c4ad2f371a8f296c88045ad3bd63f8155170ad033b1d not found: ID does not exist" Jan 29 09:30:39 crc 
kubenswrapper[4771]: I0129 09:30:39.001897 4771 scope.go:117] "RemoveContainer" containerID="6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6" Jan 29 09:30:39 crc kubenswrapper[4771]: E0129 09:30:39.002731 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6\": container with ID starting with 6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6 not found: ID does not exist" containerID="6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6" Jan 29 09:30:39 crc kubenswrapper[4771]: I0129 09:30:39.002781 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6"} err="failed to get container status \"6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6\": rpc error: code = NotFound desc = could not find container \"6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6\": container with ID starting with 6f58b350451a580b551ab55317845a0c22dafae64246c66fa3dca214d7ab70b6 not found: ID does not exist" Jan 29 09:30:39 crc kubenswrapper[4771]: I0129 09:30:39.002815 4771 scope.go:117] "RemoveContainer" containerID="04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02" Jan 29 09:30:39 crc kubenswrapper[4771]: E0129 09:30:39.003186 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02\": container with ID starting with 04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02 not found: ID does not exist" containerID="04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02" Jan 29 09:30:39 crc kubenswrapper[4771]: I0129 09:30:39.003218 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02"} err="failed to get container status \"04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02\": rpc error: code = NotFound desc = could not find container \"04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02\": container with ID starting with 04f4653e11ecbe0f7b7741d42ee786f38864afe2bb57b791350b3cf468092f02 not found: ID does not exist" Jan 29 09:30:40 crc kubenswrapper[4771]: I0129 09:30:40.857999 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" path="/var/lib/kubelet/pods/668d1e64-b62d-41a4-93c3-a43725f80a0f/volumes" Jan 29 09:30:42 crc kubenswrapper[4771]: I0129 09:30:42.702995 4771 scope.go:117] "RemoveContainer" containerID="e7301068cd18e70d6e5005a2e25bd0def388e88c9133dccbe13f5813c35e289e" Jan 29 09:30:42 crc kubenswrapper[4771]: I0129 09:30:42.735566 4771 scope.go:117] "RemoveContainer" containerID="c344d6f90b22c61b8e5c1a0e39a83f45b277933c0618a0431a31c5628c9a4a41" Jan 29 09:30:42 crc kubenswrapper[4771]: I0129 09:30:42.769408 4771 scope.go:117] "RemoveContainer" containerID="32b9c6ae277fb825fd3d78d0a81f69d4eca9182de7841afe16a7ddffba255e9d" Jan 29 09:30:42 crc kubenswrapper[4771]: I0129 09:30:42.810083 4771 scope.go:117] "RemoveContainer" containerID="47a503c09f3bdbfcf388b163122cdd820a75aea01f3465ece74aceb555e11f97" Jan 29 09:31:42 crc kubenswrapper[4771]: I0129 09:31:42.937444 4771 scope.go:117] "RemoveContainer" 
containerID="210faa8939b1d3439762d9c1ade0884e7cddf35d1eccb277509739a4e9a09e15" Jan 29 09:32:14 crc kubenswrapper[4771]: I0129 09:32:14.271077 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:32:14 crc kubenswrapper[4771]: I0129 09:32:14.271670 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.093175 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h9m7m"] Jan 29 09:32:15 crc kubenswrapper[4771]: E0129 09:32:15.093996 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-utilities" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094018 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-utilities" Jan 29 09:32:15 crc kubenswrapper[4771]: E0129 09:32:15.094031 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094039 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server" Jan 29 09:32:15 crc kubenswrapper[4771]: E0129 09:32:15.094079 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-content" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094088 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-content" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094311 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.095937 4771 util.go:30] "No sandbox for pod can be found. 
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.093175 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h9m7m"]
Jan 29 09:32:15 crc kubenswrapper[4771]: E0129 09:32:15.093996 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-utilities"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094018 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-utilities"
Jan 29 09:32:15 crc kubenswrapper[4771]: E0129 09:32:15.094031 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094039 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server"
Jan 29 09:32:15 crc kubenswrapper[4771]: E0129 09:32:15.094079 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-content"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094088 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="extract-content"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.094311 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="668d1e64-b62d-41a4-93c3-a43725f80a0f" containerName="registry-server"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.095937 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.118960 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9m7m"]
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.236805 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqmh\" (UniqueName: \"kubernetes.io/projected/39486d99-8865-40f0-b8c0-ab57de272c9f-kube-api-access-rfqmh\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.236904 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-catalog-content\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.237152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-utilities\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.339817 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqmh\" (UniqueName: \"kubernetes.io/projected/39486d99-8865-40f0-b8c0-ab57de272c9f-kube-api-access-rfqmh\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.340045 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-catalog-content\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.340136 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-utilities\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.340943 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-utilities\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.341218 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-catalog-content\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m"
Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.374082 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rfqmh\" (UniqueName: \"kubernetes.io/projected/39486d99-8865-40f0-b8c0-ab57de272c9f-kube-api-access-rfqmh\") pod \"certified-operators-h9m7m\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.440364 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.948978 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9m7m"] Jan 29 09:32:15 crc kubenswrapper[4771]: I0129 09:32:15.983214 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9m7m" event={"ID":"39486d99-8865-40f0-b8c0-ab57de272c9f","Type":"ContainerStarted","Data":"87ce9d1c8e3470e1a08970fce06c81788068331b6c44ca777ce84ce7d587e0a4"} Jan 29 09:32:16 crc kubenswrapper[4771]: I0129 09:32:16.993893 4771 generic.go:334] "Generic (PLEG): container finished" podID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerID="ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262" exitCode=0 Jan 29 09:32:16 crc kubenswrapper[4771]: I0129 09:32:16.994005 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9m7m" event={"ID":"39486d99-8865-40f0-b8c0-ab57de272c9f","Type":"ContainerDied","Data":"ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262"} Jan 29 09:32:18 crc kubenswrapper[4771]: I0129 09:32:18.006148 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9m7m" event={"ID":"39486d99-8865-40f0-b8c0-ab57de272c9f","Type":"ContainerStarted","Data":"af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b"} Jan 29 09:32:19 crc kubenswrapper[4771]: I0129 09:32:19.016675 4771 generic.go:334] "Generic (PLEG): container finished" podID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerID="af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b" exitCode=0 Jan 29 09:32:19 crc kubenswrapper[4771]: I0129 09:32:19.016766 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9m7m" event={"ID":"39486d99-8865-40f0-b8c0-ab57de272c9f","Type":"ContainerDied","Data":"af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b"} Jan 29 09:32:20 crc kubenswrapper[4771]: I0129 09:32:20.027117 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9m7m" event={"ID":"39486d99-8865-40f0-b8c0-ab57de272c9f","Type":"ContainerStarted","Data":"1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41"} Jan 29 09:32:20 crc kubenswrapper[4771]: I0129 09:32:20.047578 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h9m7m" podStartSLOduration=2.453083556 podStartE2EDuration="5.047557689s" podCreationTimestamp="2026-01-29 09:32:15 +0000 UTC" firstStartedPulling="2026-01-29 09:32:16.99609817 +0000 UTC m=+1557.118938397" lastFinishedPulling="2026-01-29 09:32:19.590572263 +0000 UTC m=+1559.713412530" observedRunningTime="2026-01-29 09:32:20.044728581 +0000 UTC m=+1560.167568818" watchObservedRunningTime="2026-01-29 09:32:20.047557689 +0000 UTC m=+1560.170397926" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.738348 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-7dzww"] Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.742990 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.748984 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dzww"] Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.834269 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-utilities\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.834395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl5xp\" (UniqueName: \"kubernetes.io/projected/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-kube-api-access-rl5xp\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.834444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-catalog-content\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.936804 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5xp\" (UniqueName: \"kubernetes.io/projected/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-kube-api-access-rl5xp\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.936883 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-catalog-content\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.936989 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-utilities\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.937576 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-utilities\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.937757 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-catalog-content\") pod \"community-operators-7dzww\" (UID: 
\"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:22 crc kubenswrapper[4771]: I0129 09:32:22.959158 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5xp\" (UniqueName: \"kubernetes.io/projected/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-kube-api-access-rl5xp\") pod \"community-operators-7dzww\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:23 crc kubenswrapper[4771]: I0129 09:32:23.069687 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:23 crc kubenswrapper[4771]: W0129 09:32:23.620538 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4a700b0_dd32_4cd7_9d7a_7a762e7ef272.slice/crio-6f8a9e4e4d656066f82a6ea686487a68682bceb30909ee2a55501cb79bc9e26e WatchSource:0}: Error finding container 6f8a9e4e4d656066f82a6ea686487a68682bceb30909ee2a55501cb79bc9e26e: Status 404 returned error can't find the container with id 6f8a9e4e4d656066f82a6ea686487a68682bceb30909ee2a55501cb79bc9e26e Jan 29 09:32:23 crc kubenswrapper[4771]: I0129 09:32:23.625224 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dzww"] Jan 29 09:32:24 crc kubenswrapper[4771]: I0129 09:32:24.087927 4771 generic.go:334] "Generic (PLEG): container finished" podID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerID="2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f" exitCode=0 Jan 29 09:32:24 crc kubenswrapper[4771]: I0129 09:32:24.087996 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dzww" event={"ID":"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272","Type":"ContainerDied","Data":"2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f"} Jan 29 09:32:24 crc kubenswrapper[4771]: I0129 09:32:24.088029 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dzww" event={"ID":"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272","Type":"ContainerStarted","Data":"6f8a9e4e4d656066f82a6ea686487a68682bceb30909ee2a55501cb79bc9e26e"} Jan 29 09:32:25 crc kubenswrapper[4771]: I0129 09:32:25.098529 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dzww" event={"ID":"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272","Type":"ContainerStarted","Data":"e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4"} Jan 29 09:32:25 crc kubenswrapper[4771]: I0129 09:32:25.441356 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:25 crc kubenswrapper[4771]: I0129 09:32:25.441469 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:25 crc kubenswrapper[4771]: I0129 09:32:25.531999 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:26 crc kubenswrapper[4771]: I0129 09:32:26.113459 4771 generic.go:334] "Generic (PLEG): container finished" podID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerID="e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4" exitCode=0 Jan 29 09:32:26 crc kubenswrapper[4771]: I0129 09:32:26.113746 4771 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-7dzww" event={"ID":"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272","Type":"ContainerDied","Data":"e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4"} Jan 29 09:32:26 crc kubenswrapper[4771]: I0129 09:32:26.174252 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:27 crc kubenswrapper[4771]: I0129 09:32:27.124900 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dzww" event={"ID":"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272","Type":"ContainerStarted","Data":"c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea"} Jan 29 09:32:27 crc kubenswrapper[4771]: I0129 09:32:27.163578 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dzww" podStartSLOduration=2.667064378 podStartE2EDuration="5.16354493s" podCreationTimestamp="2026-01-29 09:32:22 +0000 UTC" firstStartedPulling="2026-01-29 09:32:24.090648307 +0000 UTC m=+1564.213488554" lastFinishedPulling="2026-01-29 09:32:26.587128839 +0000 UTC m=+1566.709969106" observedRunningTime="2026-01-29 09:32:27.149740464 +0000 UTC m=+1567.272580711" watchObservedRunningTime="2026-01-29 09:32:27.16354493 +0000 UTC m=+1567.286385207" Jan 29 09:32:27 crc kubenswrapper[4771]: I0129 09:32:27.845974 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9m7m"] Jan 29 09:32:28 crc kubenswrapper[4771]: I0129 09:32:28.133139 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h9m7m" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="registry-server" containerID="cri-o://1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41" gracePeriod=2 Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.118369 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.149915 4771 generic.go:334] "Generic (PLEG): container finished" podID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerID="1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41" exitCode=0 Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.149962 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9m7m" event={"ID":"39486d99-8865-40f0-b8c0-ab57de272c9f","Type":"ContainerDied","Data":"1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41"} Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.149987 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9m7m" event={"ID":"39486d99-8865-40f0-b8c0-ab57de272c9f","Type":"ContainerDied","Data":"87ce9d1c8e3470e1a08970fce06c81788068331b6c44ca777ce84ce7d587e0a4"} Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.150003 4771 scope.go:117] "RemoveContainer" containerID="1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.150123 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9m7m" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.174988 4771 scope.go:117] "RemoveContainer" containerID="af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.204994 4771 scope.go:117] "RemoveContainer" containerID="ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.244545 4771 scope.go:117] "RemoveContainer" containerID="1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41" Jan 29 09:32:29 crc kubenswrapper[4771]: E0129 09:32:29.245110 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41\": container with ID starting with 1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41 not found: ID does not exist" containerID="1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.245146 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41"} err="failed to get container status \"1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41\": rpc error: code = NotFound desc = could not find container \"1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41\": container with ID starting with 1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41 not found: ID does not exist" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.245174 4771 scope.go:117] "RemoveContainer" containerID="af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b" Jan 29 09:32:29 crc kubenswrapper[4771]: E0129 09:32:29.245549 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b\": container with ID starting with af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b not found: ID does not exist" containerID="af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.245576 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b"} err="failed to get container status \"af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b\": rpc error: code = NotFound desc = could not find container \"af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b\": container with ID starting with af0f95131fca07d19e1d4c180229a045ac6774a8c677d11bb578d31c511da97b not found: ID does not exist" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.245594 4771 scope.go:117] "RemoveContainer" containerID="ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262" Jan 29 09:32:29 crc kubenswrapper[4771]: E0129 09:32:29.245988 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262\": container with ID starting with ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262 not found: ID does not exist" containerID="ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262" 
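
The paired "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" records above (the same pattern appears at 09:30:39 earlier and at 09:32:36 below) are a benign race: the kubelet removes a container from CRI-O, a follow-up status lookup for the now-deleted ID returns gRPC NotFound, and the error is logged even though the desired end state, container gone, already holds. A self-contained sketch of the usual client-side handling, folding NotFound into success during cleanup; removeContainer below is an invented stand-in, not the CRI API:

```go
// Sketch: treating gRPC NotFound as "already deleted" during cleanup, which
// is why the ContainerStatus errors above are noisy but harmless.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer simulates a runtime that has already deleted the container
// and therefore answers NotFound on a second call.
func removeContainer(id string, alreadyGone bool) error {
	if alreadyGone {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	return nil
}

func cleanup(id string, alreadyGone bool) error {
	err := removeContainer(id, alreadyGone)
	if status.Code(err) == codes.NotFound {
		// The desired end state (container gone) already holds.
		fmt.Printf("container %s already removed; ignoring NotFound\n", id)
		return nil
	}
	return err
}

func main() {
	id := "1fb97297fc73347abfbb0c11eeadc3af0aeaa7a2b98e0174d141d98ed0d86f41"
	fmt.Println(cleanup(id, false)) // first delete succeeds: <nil>
	fmt.Println(cleanup(id, true))  // second delete: NotFound is folded into <nil>
}
```
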
Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.246016 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262"} err="failed to get container status \"ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262\": rpc error: code = NotFound desc = could not find container \"ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262\": container with ID starting with ffb8c7df9fb0c36eb7647c0b47348a59b21ff12e10aa0300323c26d0b2fff262 not found: ID does not exist" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.265534 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqmh\" (UniqueName: \"kubernetes.io/projected/39486d99-8865-40f0-b8c0-ab57de272c9f-kube-api-access-rfqmh\") pod \"39486d99-8865-40f0-b8c0-ab57de272c9f\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.265751 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-catalog-content\") pod \"39486d99-8865-40f0-b8c0-ab57de272c9f\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.265929 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-utilities\") pod \"39486d99-8865-40f0-b8c0-ab57de272c9f\" (UID: \"39486d99-8865-40f0-b8c0-ab57de272c9f\") " Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.268155 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-utilities" (OuterVolumeSpecName: "utilities") pod "39486d99-8865-40f0-b8c0-ab57de272c9f" (UID: "39486d99-8865-40f0-b8c0-ab57de272c9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.274667 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39486d99-8865-40f0-b8c0-ab57de272c9f-kube-api-access-rfqmh" (OuterVolumeSpecName: "kube-api-access-rfqmh") pod "39486d99-8865-40f0-b8c0-ab57de272c9f" (UID: "39486d99-8865-40f0-b8c0-ab57de272c9f"). InnerVolumeSpecName "kube-api-access-rfqmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.312637 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39486d99-8865-40f0-b8c0-ab57de272c9f" (UID: "39486d99-8865-40f0-b8c0-ab57de272c9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.368350 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.368386 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqmh\" (UniqueName: \"kubernetes.io/projected/39486d99-8865-40f0-b8c0-ab57de272c9f-kube-api-access-rfqmh\") on node \"crc\" DevicePath \"\"" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.368397 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39486d99-8865-40f0-b8c0-ab57de272c9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.493249 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9m7m"] Jan 29 09:32:29 crc kubenswrapper[4771]: I0129 09:32:29.501573 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h9m7m"] Jan 29 09:32:29 crc kubenswrapper[4771]: E0129 09:32:29.682806 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39486d99_8865_40f0_b8c0_ab57de272c9f.slice/crio-87ce9d1c8e3470e1a08970fce06c81788068331b6c44ca777ce84ce7d587e0a4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39486d99_8865_40f0_b8c0_ab57de272c9f.slice\": RecentStats: unable to find data in memory cache]" Jan 29 09:32:30 crc kubenswrapper[4771]: I0129 09:32:30.859031 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" path="/var/lib/kubelet/pods/39486d99-8865-40f0-b8c0-ab57de272c9f/volumes" Jan 29 09:32:33 crc kubenswrapper[4771]: I0129 09:32:33.070859 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:33 crc kubenswrapper[4771]: I0129 09:32:33.072146 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:33 crc kubenswrapper[4771]: I0129 09:32:33.133909 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:33 crc kubenswrapper[4771]: I0129 09:32:33.246066 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:33 crc kubenswrapper[4771]: I0129 09:32:33.364412 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dzww"] Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.216817 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7dzww" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="registry-server" containerID="cri-o://c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea" gracePeriod=2 Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.659287 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.791601 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl5xp\" (UniqueName: \"kubernetes.io/projected/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-kube-api-access-rl5xp\") pod \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.791755 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-catalog-content\") pod \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.791811 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-utilities\") pod \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\" (UID: \"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272\") " Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.792811 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-utilities" (OuterVolumeSpecName: "utilities") pod "e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" (UID: "e4a700b0-dd32-4cd7-9d7a-7a762e7ef272"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.798263 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-kube-api-access-rl5xp" (OuterVolumeSpecName: "kube-api-access-rl5xp") pod "e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" (UID: "e4a700b0-dd32-4cd7-9d7a-7a762e7ef272"). InnerVolumeSpecName "kube-api-access-rl5xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.895173 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl5xp\" (UniqueName: \"kubernetes.io/projected/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-kube-api-access-rl5xp\") on node \"crc\" DevicePath \"\"" Jan 29 09:32:35 crc kubenswrapper[4771]: I0129 09:32:35.896076 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.212603 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" (UID: "e4a700b0-dd32-4cd7-9d7a-7a762e7ef272"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.228985 4771 generic.go:334] "Generic (PLEG): container finished" podID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerID="c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea" exitCode=0 Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.229044 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dzww" event={"ID":"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272","Type":"ContainerDied","Data":"c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea"} Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.229128 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dzww" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.229154 4771 scope.go:117] "RemoveContainer" containerID="c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.229132 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dzww" event={"ID":"e4a700b0-dd32-4cd7-9d7a-7a762e7ef272","Type":"ContainerDied","Data":"6f8a9e4e4d656066f82a6ea686487a68682bceb30909ee2a55501cb79bc9e26e"} Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.274198 4771 scope.go:117] "RemoveContainer" containerID="e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.276413 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dzww"] Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.287483 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7dzww"] Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.304338 4771 scope.go:117] "RemoveContainer" containerID="2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.307274 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.350050 4771 scope.go:117] "RemoveContainer" containerID="c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea" Jan 29 09:32:36 crc kubenswrapper[4771]: E0129 09:32:36.350642 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea\": container with ID starting with c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea not found: ID does not exist" containerID="c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.350678 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea"} err="failed to get container status \"c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea\": rpc error: code = NotFound desc = could not find container \"c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea\": container with ID starting with c62129b8d72ae07ae2337d6fa6d94e9652614774aa8aa11e1a07e572af87c7ea not found: ID does not exist" Jan 29 
09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.350721 4771 scope.go:117] "RemoveContainer" containerID="e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4" Jan 29 09:32:36 crc kubenswrapper[4771]: E0129 09:32:36.351300 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4\": container with ID starting with e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4 not found: ID does not exist" containerID="e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.351493 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4"} err="failed to get container status \"e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4\": rpc error: code = NotFound desc = could not find container \"e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4\": container with ID starting with e12b0f424c022520d38bd2a1f8dc3d726bb225271729833458acb15f38e249e4 not found: ID does not exist" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.351571 4771 scope.go:117] "RemoveContainer" containerID="2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f" Jan 29 09:32:36 crc kubenswrapper[4771]: E0129 09:32:36.352029 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f\": container with ID starting with 2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f not found: ID does not exist" containerID="2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.352061 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f"} err="failed to get container status \"2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f\": rpc error: code = NotFound desc = could not find container \"2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f\": container with ID starting with 2367da8efedc7fd5c5a77276e23bd4c799b62e9230e058755a156419cb0fc49f not found: ID does not exist" Jan 29 09:32:36 crc kubenswrapper[4771]: I0129 09:32:36.851230 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" path="/var/lib/kubelet/pods/e4a700b0-dd32-4cd7-9d7a-7a762e7ef272/volumes" Jan 29 09:32:43 crc kubenswrapper[4771]: I0129 09:32:43.043800 4771 scope.go:117] "RemoveContainer" containerID="805baec997d193d1a69880cb2f55ce468a4220eb8e8f87b5d47df294c65a5984" Jan 29 09:32:43 crc kubenswrapper[4771]: I0129 09:32:43.087170 4771 scope.go:117] "RemoveContainer" containerID="4b593668ee323d77a62a50829c845533b664528d4d107125468161cb01b9ddbb" Jan 29 09:32:43 crc kubenswrapper[4771]: I0129 09:32:43.116172 4771 scope.go:117] "RemoveContainer" containerID="1a4843cba8c89c39cb2f252de5fcc3c6837c8cc5a16eaf14d9a4009a4b8afa76" Jan 29 09:32:43 crc kubenswrapper[4771]: I0129 09:32:43.140936 4771 scope.go:117] "RemoveContainer" containerID="3e2557f09c1a7adaa32ed62d19fdb33d3027f5be10e0aa520d775be2cded6f13" Jan 29 09:32:43 crc kubenswrapper[4771]: I0129 09:32:43.169218 4771 scope.go:117] "RemoveContainer" 
containerID="28e105f96cdaa921bec35cb15b5b7737cc1cf5138d96eb3bef419b1a254e28bb" Jan 29 09:32:43 crc kubenswrapper[4771]: I0129 09:32:43.188920 4771 scope.go:117] "RemoveContainer" containerID="eef4ef30a3353403062f11e8744ad1b0c7bb15d1965e6ee9bb01c9a7590d6867" Jan 29 09:32:44 crc kubenswrapper[4771]: I0129 09:32:44.272026 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:32:44 crc kubenswrapper[4771]: I0129 09:32:44.272111 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.272123 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.273112 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.273189 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.274208 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.274314 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" gracePeriod=600 Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.395376 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542268 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qb4ct"] Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.542654 4771 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="extract-content" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542670 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="extract-content" Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.542681 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="registry-server" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542687 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="registry-server" Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.542735 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="registry-server" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542743 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="registry-server" Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.542756 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="extract-content" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542763 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="extract-content" Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.542774 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="extract-utilities" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542780 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="extract-utilities" Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.542795 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="extract-utilities" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542801 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="extract-utilities" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542975 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a700b0-dd32-4cd7-9d7a-7a762e7ef272" containerName="registry-server" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.542998 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="39486d99-8865-40f0-b8c0-ab57de272c9f" containerName="registry-server" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.544461 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.558794 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb4ct"] Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.578544 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kn4\" (UniqueName: \"kubernetes.io/projected/49b53799-1415-41f8-89ee-ff78fe6471c0-kube-api-access-b7kn4\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.578599 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-utilities\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.578839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-catalog-content\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.626887 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" exitCode=0 Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.626939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4"} Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.626978 4771 scope.go:117] "RemoveContainer" containerID="182b687cc8105c26ca625b4fc83c6757431c23f77bbdc17b6ccd0dabae3f7a24" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.627679 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:33:14 crc kubenswrapper[4771]: E0129 09:33:14.628026 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.684158 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-catalog-content\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.684253 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kn4\" (UniqueName: 
\"kubernetes.io/projected/49b53799-1415-41f8-89ee-ff78fe6471c0-kube-api-access-b7kn4\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.684274 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-utilities\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.684749 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-utilities\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.684816 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-catalog-content\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.716474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kn4\" (UniqueName: \"kubernetes.io/projected/49b53799-1415-41f8-89ee-ff78fe6471c0-kube-api-access-b7kn4\") pod \"redhat-marketplace-qb4ct\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:14 crc kubenswrapper[4771]: I0129 09:33:14.873850 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:15 crc kubenswrapper[4771]: I0129 09:33:15.424269 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb4ct"] Jan 29 09:33:15 crc kubenswrapper[4771]: I0129 09:33:15.643925 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb4ct" event={"ID":"49b53799-1415-41f8-89ee-ff78fe6471c0","Type":"ContainerStarted","Data":"1b42520d70e1ef7791f82c0c3609d120ab4e5abd3ff04ce19152bee8baa9247d"} Jan 29 09:33:16 crc kubenswrapper[4771]: I0129 09:33:16.654172 4771 generic.go:334] "Generic (PLEG): container finished" podID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerID="3977ba9b556a7bb9bf02fd50498e6f2ef68dee87e04fcd8351eff959b08fed3c" exitCode=0 Jan 29 09:33:16 crc kubenswrapper[4771]: I0129 09:33:16.654223 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb4ct" event={"ID":"49b53799-1415-41f8-89ee-ff78fe6471c0","Type":"ContainerDied","Data":"3977ba9b556a7bb9bf02fd50498e6f2ef68dee87e04fcd8351eff959b08fed3c"} Jan 29 09:33:16 crc kubenswrapper[4771]: I0129 09:33:16.656739 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:33:18 crc kubenswrapper[4771]: I0129 09:33:18.676188 4771 generic.go:334] "Generic (PLEG): container finished" podID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerID="027301ee4b908d383c97b90cb84e6525ae31aee7987c0f7ceae48dd25e7a8f7d" exitCode=0 Jan 29 09:33:18 crc kubenswrapper[4771]: I0129 09:33:18.676443 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb4ct" event={"ID":"49b53799-1415-41f8-89ee-ff78fe6471c0","Type":"ContainerDied","Data":"027301ee4b908d383c97b90cb84e6525ae31aee7987c0f7ceae48dd25e7a8f7d"} Jan 29 09:33:19 crc kubenswrapper[4771]: I0129 09:33:19.688677 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb4ct" event={"ID":"49b53799-1415-41f8-89ee-ff78fe6471c0","Type":"ContainerStarted","Data":"0d0843d6350b16602fdd5a703f23e156304278cdebbcc5b0c8ac81a153ef27ba"} Jan 29 09:33:19 crc kubenswrapper[4771]: I0129 09:33:19.720887 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qb4ct" podStartSLOduration=3.324159263 podStartE2EDuration="5.720850814s" podCreationTimestamp="2026-01-29 09:33:14 +0000 UTC" firstStartedPulling="2026-01-29 09:33:16.656357319 +0000 UTC m=+1616.779197556" lastFinishedPulling="2026-01-29 09:33:19.05304888 +0000 UTC m=+1619.175889107" observedRunningTime="2026-01-29 09:33:19.709394322 +0000 UTC m=+1619.832234549" watchObservedRunningTime="2026-01-29 09:33:19.720850814 +0000 UTC m=+1619.843691031" Jan 29 09:33:24 crc kubenswrapper[4771]: I0129 09:33:24.874679 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:24 crc kubenswrapper[4771]: I0129 09:33:24.874962 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:24 crc kubenswrapper[4771]: I0129 09:33:24.918586 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:25 crc kubenswrapper[4771]: I0129 09:33:25.787459 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:25 crc kubenswrapper[4771]: I0129 09:33:25.838554 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb4ct"] Jan 29 09:33:26 crc kubenswrapper[4771]: I0129 09:33:26.838642 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:33:26 crc kubenswrapper[4771]: E0129 09:33:26.839062 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:33:27 crc kubenswrapper[4771]: I0129 09:33:27.760045 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qb4ct" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="registry-server" containerID="cri-o://0d0843d6350b16602fdd5a703f23e156304278cdebbcc5b0c8ac81a153ef27ba" gracePeriod=2 Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.772796 4771 generic.go:334] "Generic (PLEG): container finished" podID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerID="0d0843d6350b16602fdd5a703f23e156304278cdebbcc5b0c8ac81a153ef27ba" exitCode=0 Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.772898 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb4ct" event={"ID":"49b53799-1415-41f8-89ee-ff78fe6471c0","Type":"ContainerDied","Data":"0d0843d6350b16602fdd5a703f23e156304278cdebbcc5b0c8ac81a153ef27ba"} Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.773095 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb4ct" event={"ID":"49b53799-1415-41f8-89ee-ff78fe6471c0","Type":"ContainerDied","Data":"1b42520d70e1ef7791f82c0c3609d120ab4e5abd3ff04ce19152bee8baa9247d"} Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.773112 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b42520d70e1ef7791f82c0c3609d120ab4e5abd3ff04ce19152bee8baa9247d" Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.787188 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.921182 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7kn4\" (UniqueName: \"kubernetes.io/projected/49b53799-1415-41f8-89ee-ff78fe6471c0-kube-api-access-b7kn4\") pod \"49b53799-1415-41f8-89ee-ff78fe6471c0\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.921270 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-utilities\") pod \"49b53799-1415-41f8-89ee-ff78fe6471c0\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.921372 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-catalog-content\") pod \"49b53799-1415-41f8-89ee-ff78fe6471c0\" (UID: \"49b53799-1415-41f8-89ee-ff78fe6471c0\") " Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.922160 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-utilities" (OuterVolumeSpecName: "utilities") pod "49b53799-1415-41f8-89ee-ff78fe6471c0" (UID: "49b53799-1415-41f8-89ee-ff78fe6471c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.926714 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b53799-1415-41f8-89ee-ff78fe6471c0-kube-api-access-b7kn4" (OuterVolumeSpecName: "kube-api-access-b7kn4") pod "49b53799-1415-41f8-89ee-ff78fe6471c0" (UID: "49b53799-1415-41f8-89ee-ff78fe6471c0"). InnerVolumeSpecName "kube-api-access-b7kn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:33:28 crc kubenswrapper[4771]: I0129 09:33:28.948767 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49b53799-1415-41f8-89ee-ff78fe6471c0" (UID: "49b53799-1415-41f8-89ee-ff78fe6471c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:33:29 crc kubenswrapper[4771]: I0129 09:33:29.023888 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7kn4\" (UniqueName: \"kubernetes.io/projected/49b53799-1415-41f8-89ee-ff78fe6471c0-kube-api-access-b7kn4\") on node \"crc\" DevicePath \"\"" Jan 29 09:33:29 crc kubenswrapper[4771]: I0129 09:33:29.023918 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:33:29 crc kubenswrapper[4771]: I0129 09:33:29.023927 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49b53799-1415-41f8-89ee-ff78fe6471c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:33:29 crc kubenswrapper[4771]: I0129 09:33:29.779777 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb4ct" Jan 29 09:33:29 crc kubenswrapper[4771]: I0129 09:33:29.818523 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb4ct"] Jan 29 09:33:29 crc kubenswrapper[4771]: I0129 09:33:29.827748 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb4ct"] Jan 29 09:33:30 crc kubenswrapper[4771]: I0129 09:33:30.848674 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" path="/var/lib/kubelet/pods/49b53799-1415-41f8-89ee-ff78fe6471c0/volumes" Jan 29 09:33:32 crc kubenswrapper[4771]: I0129 09:33:32.825095 4771 generic.go:334] "Generic (PLEG): container finished" podID="d2af364a-dc24-46dc-bd14-8ad420af1812" containerID="c1933524ac11a97b2c141068de1940dbaf30d0935d5d9417e28c7d8d49d3d1f8" exitCode=0 Jan 29 09:33:32 crc kubenswrapper[4771]: I0129 09:33:32.825337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" event={"ID":"d2af364a-dc24-46dc-bd14-8ad420af1812","Type":"ContainerDied","Data":"c1933524ac11a97b2c141068de1940dbaf30d0935d5d9417e28c7d8d49d3d1f8"} Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.301097 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.428606 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-ssh-key-openstack-edpm-ipam\") pod \"d2af364a-dc24-46dc-bd14-8ad420af1812\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.428731 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-bootstrap-combined-ca-bundle\") pod \"d2af364a-dc24-46dc-bd14-8ad420af1812\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.428843 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz868\" (UniqueName: \"kubernetes.io/projected/d2af364a-dc24-46dc-bd14-8ad420af1812-kube-api-access-vz868\") pod \"d2af364a-dc24-46dc-bd14-8ad420af1812\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.428889 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-inventory\") pod \"d2af364a-dc24-46dc-bd14-8ad420af1812\" (UID: \"d2af364a-dc24-46dc-bd14-8ad420af1812\") " Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.435492 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2af364a-dc24-46dc-bd14-8ad420af1812-kube-api-access-vz868" (OuterVolumeSpecName: "kube-api-access-vz868") pod "d2af364a-dc24-46dc-bd14-8ad420af1812" (UID: "d2af364a-dc24-46dc-bd14-8ad420af1812"). InnerVolumeSpecName "kube-api-access-vz868". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.437870 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d2af364a-dc24-46dc-bd14-8ad420af1812" (UID: "d2af364a-dc24-46dc-bd14-8ad420af1812"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.457621 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2af364a-dc24-46dc-bd14-8ad420af1812" (UID: "d2af364a-dc24-46dc-bd14-8ad420af1812"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.457938 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-inventory" (OuterVolumeSpecName: "inventory") pod "d2af364a-dc24-46dc-bd14-8ad420af1812" (UID: "d2af364a-dc24-46dc-bd14-8ad420af1812"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.534067 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.534150 4771 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.534186 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz868\" (UniqueName: \"kubernetes.io/projected/d2af364a-dc24-46dc-bd14-8ad420af1812-kube-api-access-vz868\") on node \"crc\" DevicePath \"\"" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.534196 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2af364a-dc24-46dc-bd14-8ad420af1812-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.849517 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" event={"ID":"d2af364a-dc24-46dc-bd14-8ad420af1812","Type":"ContainerDied","Data":"79c0950ef86ebed49c7b1194edf09b9268437d282e07ed4ee4722b655870fa7d"} Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.849570 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c0950ef86ebed49c7b1194edf09b9268437d282e07ed4ee4722b655870fa7d" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.849623 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.964424 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl"] Jan 29 09:33:34 crc kubenswrapper[4771]: E0129 09:33:34.964921 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="registry-server" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.964938 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="registry-server" Jan 29 09:33:34 crc kubenswrapper[4771]: E0129 09:33:34.964952 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="extract-content" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.964958 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="extract-content" Jan 29 09:33:34 crc kubenswrapper[4771]: E0129 09:33:34.964969 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af364a-dc24-46dc-bd14-8ad420af1812" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.964977 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af364a-dc24-46dc-bd14-8ad420af1812" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:33:34 crc kubenswrapper[4771]: E0129 09:33:34.965000 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="extract-utilities" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.965007 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="extract-utilities" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.965199 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b53799-1415-41f8-89ee-ff78fe6471c0" containerName="registry-server" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.965217 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af364a-dc24-46dc-bd14-8ad420af1812" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.965972 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.972633 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl"] Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.998105 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.998154 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.998375 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:33:34 crc kubenswrapper[4771]: I0129 09:33:34.998587 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.041757 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.041850 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fpwd\" (UniqueName: \"kubernetes.io/projected/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-kube-api-access-8fpwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.041900 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.144138 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fpwd\" (UniqueName: \"kubernetes.io/projected/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-kube-api-access-8fpwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.145211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.146059 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.150428 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.154082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.160339 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fpwd\" (UniqueName: \"kubernetes.io/projected/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-kube-api-access-8fpwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.322120 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:33:35 crc kubenswrapper[4771]: W0129 09:33:35.901955 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c4903c9_2b1b_4e50_9e27_49c4aa41974c.slice/crio-d93228a94079b54edc68388af2d0659775aee9ebe192d78562a0ddec2c0a4544 WatchSource:0}: Error finding container d93228a94079b54edc68388af2d0659775aee9ebe192d78562a0ddec2c0a4544: Status 404 returned error can't find the container with id d93228a94079b54edc68388af2d0659775aee9ebe192d78562a0ddec2c0a4544 Jan 29 09:33:35 crc kubenswrapper[4771]: I0129 09:33:35.905801 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl"] Jan 29 09:33:36 crc kubenswrapper[4771]: I0129 09:33:36.871375 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" event={"ID":"8c4903c9-2b1b-4e50-9e27-49c4aa41974c","Type":"ContainerStarted","Data":"69a4b647d64f801d50f06f537757a83b335f3c79ca7c29a2241fc79ac951ae96"} Jan 29 09:33:36 crc kubenswrapper[4771]: I0129 09:33:36.871764 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" event={"ID":"8c4903c9-2b1b-4e50-9e27-49c4aa41974c","Type":"ContainerStarted","Data":"d93228a94079b54edc68388af2d0659775aee9ebe192d78562a0ddec2c0a4544"} Jan 29 09:33:36 crc kubenswrapper[4771]: I0129 09:33:36.884282 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" podStartSLOduration=2.164822165 podStartE2EDuration="2.884264536s" podCreationTimestamp="2026-01-29 09:33:34 +0000 
UTC" firstStartedPulling="2026-01-29 09:33:35.906127103 +0000 UTC m=+1636.028967370" lastFinishedPulling="2026-01-29 09:33:36.625569504 +0000 UTC m=+1636.748409741" observedRunningTime="2026-01-29 09:33:36.883522186 +0000 UTC m=+1637.006362433" watchObservedRunningTime="2026-01-29 09:33:36.884264536 +0000 UTC m=+1637.007104763" Jan 29 09:33:38 crc kubenswrapper[4771]: I0129 09:33:38.838586 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:33:38 crc kubenswrapper[4771]: E0129 09:33:38.840235 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:33:53 crc kubenswrapper[4771]: I0129 09:33:53.838009 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:33:53 crc kubenswrapper[4771]: E0129 09:33:53.838854 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:34:05 crc kubenswrapper[4771]: I0129 09:34:05.837542 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:34:05 crc kubenswrapper[4771]: E0129 09:34:05.838229 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:34:06 crc kubenswrapper[4771]: I0129 09:34:06.052130 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-j5t4z"] Jan 29 09:34:06 crc kubenswrapper[4771]: I0129 09:34:06.065010 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5efa-account-create-update-6vnhr"] Jan 29 09:34:06 crc kubenswrapper[4771]: I0129 09:34:06.076930 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-j5t4z"] Jan 29 09:34:06 crc kubenswrapper[4771]: I0129 09:34:06.087567 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5efa-account-create-update-6vnhr"] Jan 29 09:34:06 crc kubenswrapper[4771]: I0129 09:34:06.848368 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152cca8c-e161-488f-b400-ec92a43fd836" path="/var/lib/kubelet/pods/152cca8c-e161-488f-b400-ec92a43fd836/volumes" Jan 29 09:34:06 crc kubenswrapper[4771]: I0129 09:34:06.849715 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e97643-6806-457b-998d-82ddd64ccd99" path="/var/lib/kubelet/pods/f1e97643-6806-457b-998d-82ddd64ccd99/volumes" Jan 29 09:34:11 crc kubenswrapper[4771]: 
I0129 09:34:11.039200 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ckk6x"] Jan 29 09:34:11 crc kubenswrapper[4771]: I0129 09:34:11.057071 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b7brh"] Jan 29 09:34:11 crc kubenswrapper[4771]: I0129 09:34:11.079519 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-14ea-account-create-update-89b88"] Jan 29 09:34:11 crc kubenswrapper[4771]: I0129 09:34:11.091718 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-61ed-account-create-update-ndvj5"] Jan 29 09:34:11 crc kubenswrapper[4771]: I0129 09:34:11.100092 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b7brh"] Jan 29 09:34:11 crc kubenswrapper[4771]: I0129 09:34:11.109779 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-14ea-account-create-update-89b88"] Jan 29 09:34:11 crc kubenswrapper[4771]: I0129 09:34:11.118926 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-61ed-account-create-update-ndvj5"] Jan 29 09:34:11 crc kubenswrapper[4771]: I0129 09:34:11.130930 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ckk6x"] Jan 29 09:34:12 crc kubenswrapper[4771]: I0129 09:34:12.851343 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0d75c6-5a56-4a42-a335-bb7ba669b7f8" path="/var/lib/kubelet/pods/4d0d75c6-5a56-4a42-a335-bb7ba669b7f8/volumes" Jan 29 09:34:12 crc kubenswrapper[4771]: I0129 09:34:12.852205 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="752a764f-52c0-4773-a3fc-8e2a62643f06" path="/var/lib/kubelet/pods/752a764f-52c0-4773-a3fc-8e2a62643f06/volumes" Jan 29 09:34:12 crc kubenswrapper[4771]: I0129 09:34:12.852830 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897e97b1-08ea-4e51-a36f-38727e7eb34e" path="/var/lib/kubelet/pods/897e97b1-08ea-4e51-a36f-38727e7eb34e/volumes" Jan 29 09:34:12 crc kubenswrapper[4771]: I0129 09:34:12.853340 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ec8a48-5800-43a5-ae42-1f2a2309463d" path="/var/lib/kubelet/pods/c8ec8a48-5800-43a5-ae42-1f2a2309463d/volumes" Jan 29 09:34:19 crc kubenswrapper[4771]: I0129 09:34:19.838892 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:34:19 crc kubenswrapper[4771]: E0129 09:34:19.839512 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:34:29 crc kubenswrapper[4771]: I0129 09:34:29.042863 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lsfr8"] Jan 29 09:34:29 crc kubenswrapper[4771]: I0129 09:34:29.054823 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lsfr8"] Jan 29 09:34:30 crc kubenswrapper[4771]: I0129 09:34:30.851652 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3997d1-0e52-4674-b604-10b057141b3a" 
path="/var/lib/kubelet/pods/0d3997d1-0e52-4674-b604-10b057141b3a/volumes" Jan 29 09:34:33 crc kubenswrapper[4771]: I0129 09:34:33.838547 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:34:33 crc kubenswrapper[4771]: E0129 09:34:33.839206 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:34:43 crc kubenswrapper[4771]: I0129 09:34:43.378531 4771 scope.go:117] "RemoveContainer" containerID="825913871c9299af1d45814e16a534c6023c7c4c599038cc75a8f0035b4a8829" Jan 29 09:34:43 crc kubenswrapper[4771]: I0129 09:34:43.411862 4771 scope.go:117] "RemoveContainer" containerID="62bce91722b6b2610db07fc46f1476a4223ff84f20270fda73f87b4378b18b24" Jan 29 09:34:43 crc kubenswrapper[4771]: I0129 09:34:43.445007 4771 scope.go:117] "RemoveContainer" containerID="b0195e31a697d5a99d9579d22737a1b233d8aaf64df5a0f1cfa3118ca9ac3a62" Jan 29 09:34:43 crc kubenswrapper[4771]: I0129 09:34:43.502029 4771 scope.go:117] "RemoveContainer" containerID="c4ca2f95bf39a490caba3f0bbcbf7fc838e377f4d6bcce3978d353318f561795" Jan 29 09:34:43 crc kubenswrapper[4771]: I0129 09:34:43.535637 4771 scope.go:117] "RemoveContainer" containerID="5f604bace21ae6898615fab85ba57ec70171c13bb83ecff5ab6a670d8cd6e087" Jan 29 09:34:43 crc kubenswrapper[4771]: I0129 09:34:43.576901 4771 scope.go:117] "RemoveContainer" containerID="1bf340c10d80c710bc3952467f5556614618fd58b433b1d283cbcddc16420d1b" Jan 29 09:34:43 crc kubenswrapper[4771]: I0129 09:34:43.626331 4771 scope.go:117] "RemoveContainer" containerID="66fa316576c918d465f02bc66ba86fccdf2935ff17bb14c3491cbf3c2d0321c4" Jan 29 09:34:44 crc kubenswrapper[4771]: I0129 09:34:44.838067 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:34:44 crc kubenswrapper[4771]: E0129 09:34:44.838585 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.041878 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ftr47"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.051912 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-flpb9"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.063877 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-63fb-account-create-update-xskfz"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.071502 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3781-account-create-update-64ds5"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.081020 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ftr47"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 
09:34:54.091413 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3781-account-create-update-64ds5"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.101639 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-flpb9"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.110895 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-63fb-account-create-update-xskfz"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.119120 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f3e6-account-create-update-p6nfc"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.130180 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-krx65"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.138104 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f3e6-account-create-update-p6nfc"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.147779 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-krx65"] Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.855312 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a486225-e2df-458e-8db7-5d5bc40e7fe4" path="/var/lib/kubelet/pods/5a486225-e2df-458e-8db7-5d5bc40e7fe4/volumes" Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.856653 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84de2c50-e584-4134-82bd-5868077005af" path="/var/lib/kubelet/pods/84de2c50-e584-4134-82bd-5868077005af/volumes" Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.857948 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abdcffc7-132e-4569-9a16-cae3202fcab8" path="/var/lib/kubelet/pods/abdcffc7-132e-4569-9a16-cae3202fcab8/volumes" Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.859294 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef8563f-f9a9-4ca6-a211-a7b1745337bf" path="/var/lib/kubelet/pods/aef8563f-f9a9-4ca6-a211-a7b1745337bf/volumes" Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.862090 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72" path="/var/lib/kubelet/pods/ba6283e7-9a9a-42d9-ada6-d7e2eb5f6b72/volumes" Jan 29 09:34:54 crc kubenswrapper[4771]: I0129 09:34:54.863633 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd" path="/var/lib/kubelet/pods/cf6c8fce-7eae-4e0b-a0f0-96219e6d81dd/volumes" Jan 29 09:34:55 crc kubenswrapper[4771]: I0129 09:34:55.837735 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:34:55 crc kubenswrapper[4771]: E0129 09:34:55.838227 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:34:56 crc kubenswrapper[4771]: I0129 09:34:56.029142 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wl28d"] Jan 29 09:34:56 crc kubenswrapper[4771]: I0129 09:34:56.049943 4771 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wl28d"] Jan 29 09:34:56 crc kubenswrapper[4771]: I0129 09:34:56.860436 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9c1def-826f-4029-94c3-5670ce333c66" path="/var/lib/kubelet/pods/2a9c1def-826f-4029-94c3-5670ce333c66/volumes" Jan 29 09:34:59 crc kubenswrapper[4771]: I0129 09:34:59.037272 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vrfbh"] Jan 29 09:34:59 crc kubenswrapper[4771]: I0129 09:34:59.046752 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vrfbh"] Jan 29 09:35:00 crc kubenswrapper[4771]: I0129 09:35:00.862233 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fc0688-0a4c-48f4-83c6-2aa6bbfcde29" path="/var/lib/kubelet/pods/94fc0688-0a4c-48f4-83c6-2aa6bbfcde29/volumes" Jan 29 09:35:07 crc kubenswrapper[4771]: I0129 09:35:07.838614 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:35:07 crc kubenswrapper[4771]: E0129 09:35:07.839328 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:35:19 crc kubenswrapper[4771]: I0129 09:35:19.838618 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:35:19 crc kubenswrapper[4771]: E0129 09:35:19.839440 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:35:26 crc kubenswrapper[4771]: I0129 09:35:26.928336 4771 generic.go:334] "Generic (PLEG): container finished" podID="8c4903c9-2b1b-4e50-9e27-49c4aa41974c" containerID="69a4b647d64f801d50f06f537757a83b335f3c79ca7c29a2241fc79ac951ae96" exitCode=0 Jan 29 09:35:26 crc kubenswrapper[4771]: I0129 09:35:26.928398 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" event={"ID":"8c4903c9-2b1b-4e50-9e27-49c4aa41974c","Type":"ContainerDied","Data":"69a4b647d64f801d50f06f537757a83b335f3c79ca7c29a2241fc79ac951ae96"} Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.326744 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.442822 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fpwd\" (UniqueName: \"kubernetes.io/projected/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-kube-api-access-8fpwd\") pod \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.443031 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-ssh-key-openstack-edpm-ipam\") pod \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.443052 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-inventory\") pod \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\" (UID: \"8c4903c9-2b1b-4e50-9e27-49c4aa41974c\") " Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.448531 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-kube-api-access-8fpwd" (OuterVolumeSpecName: "kube-api-access-8fpwd") pod "8c4903c9-2b1b-4e50-9e27-49c4aa41974c" (UID: "8c4903c9-2b1b-4e50-9e27-49c4aa41974c"). InnerVolumeSpecName "kube-api-access-8fpwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.480844 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-inventory" (OuterVolumeSpecName: "inventory") pod "8c4903c9-2b1b-4e50-9e27-49c4aa41974c" (UID: "8c4903c9-2b1b-4e50-9e27-49c4aa41974c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.480963 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c4903c9-2b1b-4e50-9e27-49c4aa41974c" (UID: "8c4903c9-2b1b-4e50-9e27-49c4aa41974c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.546490 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.546545 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.546564 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fpwd\" (UniqueName: \"kubernetes.io/projected/8c4903c9-2b1b-4e50-9e27-49c4aa41974c-kube-api-access-8fpwd\") on node \"crc\" DevicePath \"\"" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.951794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" event={"ID":"8c4903c9-2b1b-4e50-9e27-49c4aa41974c","Type":"ContainerDied","Data":"d93228a94079b54edc68388af2d0659775aee9ebe192d78562a0ddec2c0a4544"} Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.951842 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93228a94079b54edc68388af2d0659775aee9ebe192d78562a0ddec2c0a4544" Jan 29 09:35:28 crc kubenswrapper[4771]: I0129 09:35:28.952085 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.032587 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s"] Jan 29 09:35:29 crc kubenswrapper[4771]: E0129 09:35:29.033201 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4903c9-2b1b-4e50-9e27-49c4aa41974c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.033230 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4903c9-2b1b-4e50-9e27-49c4aa41974c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.033566 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4903c9-2b1b-4e50-9e27-49c4aa41974c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.034630 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.037063 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.037122 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.037351 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.038077 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.042844 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s"] Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.156948 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.157005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcsng\" (UniqueName: \"kubernetes.io/projected/620aab50-8510-4a95-a53c-7dd7fac714b6-kube-api-access-vcsng\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.157163 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.258496 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.258540 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcsng\" (UniqueName: \"kubernetes.io/projected/620aab50-8510-4a95-a53c-7dd7fac714b6-kube-api-access-vcsng\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.258634 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.274104 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.274983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcsng\" (UniqueName: \"kubernetes.io/projected/620aab50-8510-4a95-a53c-7dd7fac714b6-kube-api-access-vcsng\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.274022 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.353420 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.873499 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s"] Jan 29 09:35:29 crc kubenswrapper[4771]: I0129 09:35:29.961921 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" event={"ID":"620aab50-8510-4a95-a53c-7dd7fac714b6","Type":"ContainerStarted","Data":"3448a28ebf5ea69a9bd9cde96ea33f3fbb2dd61b2a4418146493913d903de4f8"} Jan 29 09:35:30 crc kubenswrapper[4771]: I0129 09:35:30.845963 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:35:30 crc kubenswrapper[4771]: E0129 09:35:30.847569 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:35:30 crc kubenswrapper[4771]: I0129 09:35:30.971495 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" event={"ID":"620aab50-8510-4a95-a53c-7dd7fac714b6","Type":"ContainerStarted","Data":"250910494ec1cea50d1a5dafa4f4e4c635726b6f8b4958fa3ab7d085cf964371"} Jan 29 09:35:30 crc kubenswrapper[4771]: I0129 09:35:30.990442 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" podStartSLOduration=1.474388644 podStartE2EDuration="1.990424664s" podCreationTimestamp="2026-01-29 09:35:29 +0000 UTC" firstStartedPulling="2026-01-29 09:35:29.877365991 +0000 UTC m=+1750.000206218" lastFinishedPulling="2026-01-29 09:35:30.393402011 +0000 UTC m=+1750.516242238" observedRunningTime="2026-01-29 09:35:30.985386197 +0000 UTC m=+1751.108226424" watchObservedRunningTime="2026-01-29 09:35:30.990424664 +0000 UTC m=+1751.113264891" Jan 29 09:35:39 crc kubenswrapper[4771]: I0129 09:35:39.055212 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-p2ghj"] Jan 29 09:35:39 crc kubenswrapper[4771]: I0129 09:35:39.064241 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-p2ghj"] Jan 29 09:35:40 crc kubenswrapper[4771]: I0129 09:35:40.847631 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de5b9ec-6c6b-4e51-a053-d0076c2c729e" path="/var/lib/kubelet/pods/5de5b9ec-6c6b-4e51-a053-d0076c2c729e/volumes" Jan 29 09:35:43 crc kubenswrapper[4771]: I0129 09:35:43.762343 4771 scope.go:117] "RemoveContainer" containerID="0be693f3165b2f1f17525c09c9d6297b15e5a3ffb976f1933f2a65406b7b1d1d" Jan 29 09:35:43 crc kubenswrapper[4771]: I0129 09:35:43.806973 4771 scope.go:117] "RemoveContainer" containerID="dc0b5dc6591c35e10a162bd6b094351f8c915395a6e4867c6d65917ddc0aa69d" Jan 29 09:35:43 crc kubenswrapper[4771]: I0129 09:35:43.837886 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:35:43 crc kubenswrapper[4771]: E0129 09:35:43.838215 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:35:43 crc kubenswrapper[4771]: I0129 09:35:43.859734 4771 scope.go:117] "RemoveContainer" containerID="15e6be68d699d6de5d45565fc9a34c3da2cbb0b4979d3b469bbe1a9aec42a6a6" Jan 29 09:35:43 crc kubenswrapper[4771]: I0129 09:35:43.886083 4771 scope.go:117] "RemoveContainer" containerID="60a6d3abd8df862a73d1ff8180e088cb7adb003ad5b653d9379bb5316d4d3ed5" Jan 29 09:35:43 crc kubenswrapper[4771]: I0129 09:35:43.978412 4771 scope.go:117] "RemoveContainer" containerID="960c016c3255df5162075f77e1567fb0572043547391ed97fa6033c12dda713e" Jan 29 09:35:44 crc kubenswrapper[4771]: I0129 09:35:44.024298 4771 scope.go:117] "RemoveContainer" containerID="249ecd70e7c7b374e321a1b6edfc01dbb806f6f56f85ed815998d59ab3c7e6e5" Jan 29 09:35:44 crc kubenswrapper[4771]: I0129 09:35:44.064550 4771 scope.go:117] "RemoveContainer" containerID="06cb6c592df78188c827b6ed63abfbb10933762c80e2ffb812249b011e10533c" Jan 29 09:35:44 crc kubenswrapper[4771]: I0129 09:35:44.085153 4771 scope.go:117] "RemoveContainer" containerID="12cce7233911ce2afbecc7f41abeb2339f7c9c0f50aa596dea3521b06f33923d" Jan 29 09:35:44 crc kubenswrapper[4771]: I0129 09:35:44.111345 4771 scope.go:117] "RemoveContainer" containerID="02113350a64fb52a1a50a806562328d66a7f36ada03fabf6d1498be3bd7e7929" Jan 29 09:35:45 crc kubenswrapper[4771]: I0129 09:35:45.042420 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8v8h9"] Jan 29 09:35:45 crc 
kubenswrapper[4771]: I0129 09:35:45.051471 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8v8h9"] Jan 29 09:35:46 crc kubenswrapper[4771]: I0129 09:35:46.847778 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883872f0-cb88-4095-b918-b971d8c3c0b6" path="/var/lib/kubelet/pods/883872f0-cb88-4095-b918-b971d8c3c0b6/volumes" Jan 29 09:35:54 crc kubenswrapper[4771]: I0129 09:35:54.039618 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2mfct"] Jan 29 09:35:54 crc kubenswrapper[4771]: I0129 09:35:54.050139 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bvppf"] Jan 29 09:35:54 crc kubenswrapper[4771]: I0129 09:35:54.059431 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bvppf"] Jan 29 09:35:54 crc kubenswrapper[4771]: I0129 09:35:54.077199 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2mfct"] Jan 29 09:35:54 crc kubenswrapper[4771]: I0129 09:35:54.849317 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015c0ccc-d729-4d0a-8168-b897f1c451da" path="/var/lib/kubelet/pods/015c0ccc-d729-4d0a-8168-b897f1c451da/volumes" Jan 29 09:35:54 crc kubenswrapper[4771]: I0129 09:35:54.849935 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9e93f7-bada-4141-887c-4174d899b95e" path="/var/lib/kubelet/pods/8a9e93f7-bada-4141-887c-4174d899b95e/volumes" Jan 29 09:35:56 crc kubenswrapper[4771]: I0129 09:35:56.837560 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:35:56 crc kubenswrapper[4771]: E0129 09:35:56.838377 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:36:10 crc kubenswrapper[4771]: I0129 09:36:10.040612 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-g2fcd"] Jan 29 09:36:10 crc kubenswrapper[4771]: I0129 09:36:10.052224 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-g2fcd"] Jan 29 09:36:10 crc kubenswrapper[4771]: I0129 09:36:10.849389 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ab9c1d-3798-4151-bf0b-63227f0e45a4" path="/var/lib/kubelet/pods/29ab9c1d-3798-4151-bf0b-63227f0e45a4/volumes" Jan 29 09:36:11 crc kubenswrapper[4771]: I0129 09:36:11.838409 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:36:11 crc kubenswrapper[4771]: E0129 09:36:11.839833 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:36:22 crc kubenswrapper[4771]: I0129 09:36:22.839282 4771 scope.go:117] "RemoveContainer" 
containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:36:22 crc kubenswrapper[4771]: E0129 09:36:22.840678 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:36:37 crc kubenswrapper[4771]: I0129 09:36:37.838449 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:36:37 crc kubenswrapper[4771]: E0129 09:36:37.839163 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:36:44 crc kubenswrapper[4771]: I0129 09:36:44.308611 4771 scope.go:117] "RemoveContainer" containerID="91be09616b3fb43ae30e99014c8a1b5f10e78679d0a55fa87b96ff86e709a2ef" Jan 29 09:36:44 crc kubenswrapper[4771]: I0129 09:36:44.358522 4771 scope.go:117] "RemoveContainer" containerID="c7f0a480a6332a49cb2d4e45193ef1fed5cfded327bae127694ebc276bd37041" Jan 29 09:36:44 crc kubenswrapper[4771]: I0129 09:36:44.432790 4771 scope.go:117] "RemoveContainer" containerID="79076e4a83c2a714ab820355ea704fbdde238eb57135e539a9aa3f931719370a" Jan 29 09:36:44 crc kubenswrapper[4771]: I0129 09:36:44.474441 4771 scope.go:117] "RemoveContainer" containerID="ee279fb77130501382f07268d88b8252be9bf08b614342e0da6c83af22e3313b" Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.043123 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2549-account-create-update-7mcrn"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.050117 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4tqtd"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.059874 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2549-account-create-update-7mcrn"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.069577 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-aa3d-account-create-update-s9v84"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.079157 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-edb4-account-create-update-n2gmm"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.085897 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jq6c5"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.093240 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-frmrw"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.100057 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4tqtd"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.107230 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jq6c5"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.115014 4771 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-aa3d-account-create-update-s9v84"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.123635 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-edb4-account-create-update-n2gmm"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.132159 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-frmrw"] Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.849888 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ce4d22-6cc6-4f30-9b5a-3870f8268fe9" path="/var/lib/kubelet/pods/21ce4d22-6cc6-4f30-9b5a-3870f8268fe9/volumes" Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.850811 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c01cd54-a15c-4367-8e1c-ee8cdc10373c" path="/var/lib/kubelet/pods/5c01cd54-a15c-4367-8e1c-ee8cdc10373c/volumes" Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.851545 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7037e6e8-a6d3-417e-9a83-091fd1492909" path="/var/lib/kubelet/pods/7037e6e8-a6d3-417e-9a83-091fd1492909/volumes" Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.852309 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdc5aac-5c6a-4ebd-95b6-34a876f299b4" path="/var/lib/kubelet/pods/7fdc5aac-5c6a-4ebd-95b6-34a876f299b4/volumes" Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.853850 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a56c69ee-17ef-4f36-b312-8a7ba10df44a" path="/var/lib/kubelet/pods/a56c69ee-17ef-4f36-b312-8a7ba10df44a/volumes" Jan 29 09:36:46 crc kubenswrapper[4771]: I0129 09:36:46.854558 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedabb0e-4487-468e-91d2-af4e8767a0d9" path="/var/lib/kubelet/pods/dedabb0e-4487-468e-91d2-af4e8767a0d9/volumes" Jan 29 09:36:48 crc kubenswrapper[4771]: I0129 09:36:48.838249 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:36:48 crc kubenswrapper[4771]: E0129 09:36:48.838807 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:36:58 crc kubenswrapper[4771]: I0129 09:36:58.779813 4771 generic.go:334] "Generic (PLEG): container finished" podID="620aab50-8510-4a95-a53c-7dd7fac714b6" containerID="250910494ec1cea50d1a5dafa4f4e4c635726b6f8b4958fa3ab7d085cf964371" exitCode=0 Jan 29 09:36:58 crc kubenswrapper[4771]: I0129 09:36:58.780837 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" event={"ID":"620aab50-8510-4a95-a53c-7dd7fac714b6","Type":"ContainerDied","Data":"250910494ec1cea50d1a5dafa4f4e4c635726b6f8b4958fa3ab7d085cf964371"} Jan 29 09:36:59 crc kubenswrapper[4771]: I0129 09:36:59.837704 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:36:59 crc kubenswrapper[4771]: E0129 09:36:59.838199 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.214779 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.295124 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-inventory\") pod \"620aab50-8510-4a95-a53c-7dd7fac714b6\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.295315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-ssh-key-openstack-edpm-ipam\") pod \"620aab50-8510-4a95-a53c-7dd7fac714b6\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.295381 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcsng\" (UniqueName: \"kubernetes.io/projected/620aab50-8510-4a95-a53c-7dd7fac714b6-kube-api-access-vcsng\") pod \"620aab50-8510-4a95-a53c-7dd7fac714b6\" (UID: \"620aab50-8510-4a95-a53c-7dd7fac714b6\") " Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.304956 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620aab50-8510-4a95-a53c-7dd7fac714b6-kube-api-access-vcsng" (OuterVolumeSpecName: "kube-api-access-vcsng") pod "620aab50-8510-4a95-a53c-7dd7fac714b6" (UID: "620aab50-8510-4a95-a53c-7dd7fac714b6"). InnerVolumeSpecName "kube-api-access-vcsng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.322770 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-inventory" (OuterVolumeSpecName: "inventory") pod "620aab50-8510-4a95-a53c-7dd7fac714b6" (UID: "620aab50-8510-4a95-a53c-7dd7fac714b6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.325384 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "620aab50-8510-4a95-a53c-7dd7fac714b6" (UID: "620aab50-8510-4a95-a53c-7dd7fac714b6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.397563 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcsng\" (UniqueName: \"kubernetes.io/projected/620aab50-8510-4a95-a53c-7dd7fac714b6-kube-api-access-vcsng\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.397597 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.397608 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/620aab50-8510-4a95-a53c-7dd7fac714b6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.802993 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" event={"ID":"620aab50-8510-4a95-a53c-7dd7fac714b6","Type":"ContainerDied","Data":"3448a28ebf5ea69a9bd9cde96ea33f3fbb2dd61b2a4418146493913d903de4f8"} Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.803041 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3448a28ebf5ea69a9bd9cde96ea33f3fbb2dd61b2a4418146493913d903de4f8" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.803063 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.885021 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw"] Jan 29 09:37:00 crc kubenswrapper[4771]: E0129 09:37:00.885495 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620aab50-8510-4a95-a53c-7dd7fac714b6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.885513 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="620aab50-8510-4a95-a53c-7dd7fac714b6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.885970 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="620aab50-8510-4a95-a53c-7dd7fac714b6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.886826 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.889349 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.889418 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.890257 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.890538 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:37:00 crc kubenswrapper[4771]: I0129 09:37:00.895838 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw"] Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.008477 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7dv\" (UniqueName: \"kubernetes.io/projected/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-kube-api-access-ss7dv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.008774 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.008967 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.111155 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7dv\" (UniqueName: \"kubernetes.io/projected/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-kube-api-access-ss7dv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.111220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.111328 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.116531 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.123327 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.131448 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7dv\" (UniqueName: \"kubernetes.io/projected/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-kube-api-access-ss7dv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bssfw\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.206280 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.743542 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw"] Jan 29 09:37:01 crc kubenswrapper[4771]: I0129 09:37:01.812275 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" event={"ID":"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7","Type":"ContainerStarted","Data":"c3462e706b0cd9d6c81ad858412eda49614546af2f2a810a4fbb74fdf1854874"} Jan 29 09:37:02 crc kubenswrapper[4771]: I0129 09:37:02.827341 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" event={"ID":"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7","Type":"ContainerStarted","Data":"ee333af9fef2d5a6a975ee1e089e20ef6ecd5a5ba7bf1e073e721d142dcc1c6a"} Jan 29 09:37:02 crc kubenswrapper[4771]: I0129 09:37:02.858132 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" podStartSLOduration=2.233419808 podStartE2EDuration="2.858103094s" podCreationTimestamp="2026-01-29 09:37:00 +0000 UTC" firstStartedPulling="2026-01-29 09:37:01.743926471 +0000 UTC m=+1841.866766698" lastFinishedPulling="2026-01-29 09:37:02.368609757 +0000 UTC m=+1842.491449984" observedRunningTime="2026-01-29 09:37:02.854758713 +0000 UTC m=+1842.977598960" watchObservedRunningTime="2026-01-29 09:37:02.858103094 +0000 UTC m=+1842.980943341" Jan 29 09:37:03 crc kubenswrapper[4771]: I0129 09:37:03.032264 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-55bc4c6647-vmgxk" 
podUID="70ca5b45-1804-4830-8ede-b28279d8d4ce" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 29 09:37:07 crc kubenswrapper[4771]: I0129 09:37:07.873056 4771 generic.go:334] "Generic (PLEG): container finished" podID="8c8a65a0-1d3a-413d-964f-71d69bb1c3b7" containerID="ee333af9fef2d5a6a975ee1e089e20ef6ecd5a5ba7bf1e073e721d142dcc1c6a" exitCode=0 Jan 29 09:37:07 crc kubenswrapper[4771]: I0129 09:37:07.873159 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" event={"ID":"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7","Type":"ContainerDied","Data":"ee333af9fef2d5a6a975ee1e089e20ef6ecd5a5ba7bf1e073e721d142dcc1c6a"} Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.248592 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.370061 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7dv\" (UniqueName: \"kubernetes.io/projected/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-kube-api-access-ss7dv\") pod \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.370378 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-inventory\") pod \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.370472 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-ssh-key-openstack-edpm-ipam\") pod \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\" (UID: \"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7\") " Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.379110 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-kube-api-access-ss7dv" (OuterVolumeSpecName: "kube-api-access-ss7dv") pod "8c8a65a0-1d3a-413d-964f-71d69bb1c3b7" (UID: "8c8a65a0-1d3a-413d-964f-71d69bb1c3b7"). InnerVolumeSpecName "kube-api-access-ss7dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.405513 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c8a65a0-1d3a-413d-964f-71d69bb1c3b7" (UID: "8c8a65a0-1d3a-413d-964f-71d69bb1c3b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.409983 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-inventory" (OuterVolumeSpecName: "inventory") pod "8c8a65a0-1d3a-413d-964f-71d69bb1c3b7" (UID: "8c8a65a0-1d3a-413d-964f-71d69bb1c3b7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.473601 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7dv\" (UniqueName: \"kubernetes.io/projected/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-kube-api-access-ss7dv\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.473647 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.473658 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c8a65a0-1d3a-413d-964f-71d69bb1c3b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.890193 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" event={"ID":"8c8a65a0-1d3a-413d-964f-71d69bb1c3b7","Type":"ContainerDied","Data":"c3462e706b0cd9d6c81ad858412eda49614546af2f2a810a4fbb74fdf1854874"} Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.890230 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3462e706b0cd9d6c81ad858412eda49614546af2f2a810a4fbb74fdf1854874" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.890264 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bssfw" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.962606 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb"] Jan 29 09:37:09 crc kubenswrapper[4771]: E0129 09:37:09.963012 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8a65a0-1d3a-413d-964f-71d69bb1c3b7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.963030 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8a65a0-1d3a-413d-964f-71d69bb1c3b7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.963211 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8a65a0-1d3a-413d-964f-71d69bb1c3b7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.963810 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.966789 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.970995 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.971227 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.972834 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:37:09 crc kubenswrapper[4771]: I0129 09:37:09.973034 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb"] Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.084246 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.084351 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.084745 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbcps\" (UniqueName: \"kubernetes.io/projected/72d4ad1e-f80c-43d6-a515-3f08c23df279-kube-api-access-mbcps\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.186819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.187158 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbcps\" (UniqueName: \"kubernetes.io/projected/72d4ad1e-f80c-43d6-a515-3f08c23df279-kube-api-access-mbcps\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.187216 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.196309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.196713 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.204146 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbcps\" (UniqueName: \"kubernetes.io/projected/72d4ad1e-f80c-43d6-a515-3f08c23df279-kube-api-access-mbcps\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zb8mb\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.281810 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.791755 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb"] Jan 29 09:37:10 crc kubenswrapper[4771]: I0129 09:37:10.902237 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" event={"ID":"72d4ad1e-f80c-43d6-a515-3f08c23df279","Type":"ContainerStarted","Data":"69f7cd997fc579f5c23cabfb94bfca6db8beec951282cf9489cc798bf9ff1d24"} Jan 29 09:37:12 crc kubenswrapper[4771]: I0129 09:37:12.923673 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" event={"ID":"72d4ad1e-f80c-43d6-a515-3f08c23df279","Type":"ContainerStarted","Data":"1576c2ce6b6a899a4e7968ec07967ecdf7a690d5affb960091965bdc741628e7"} Jan 29 09:37:13 crc kubenswrapper[4771]: I0129 09:37:13.838017 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:37:13 crc kubenswrapper[4771]: E0129 09:37:13.838546 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:37:26 crc kubenswrapper[4771]: I0129 09:37:26.838305 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:37:26 crc kubenswrapper[4771]: E0129 09:37:26.839559 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:37:39 crc kubenswrapper[4771]: I0129 09:37:39.067307 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" podStartSLOduration=28.58834774 podStartE2EDuration="30.067284127s" podCreationTimestamp="2026-01-29 09:37:09 +0000 UTC" firstStartedPulling="2026-01-29 09:37:10.798655902 +0000 UTC m=+1850.921496139" lastFinishedPulling="2026-01-29 09:37:12.277592259 +0000 UTC m=+1852.400432526" observedRunningTime="2026-01-29 09:37:12.950379184 +0000 UTC m=+1853.073219411" watchObservedRunningTime="2026-01-29 09:37:39.067284127 +0000 UTC m=+1879.190124354" Jan 29 09:37:39 crc kubenswrapper[4771]: I0129 09:37:39.068608 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vlnn"] Jan 29 09:37:39 crc kubenswrapper[4771]: I0129 09:37:39.078637 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vlnn"] Jan 29 09:37:39 crc kubenswrapper[4771]: I0129 09:37:39.839499 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:37:39 crc kubenswrapper[4771]: E0129 09:37:39.839800 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:37:40 crc kubenswrapper[4771]: I0129 09:37:40.847754 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d3d46b-96ba-460a-a066-52c0a041d34f" path="/var/lib/kubelet/pods/79d3d46b-96ba-460a-a066-52c0a041d34f/volumes" Jan 29 09:37:44 crc kubenswrapper[4771]: I0129 09:37:44.655601 4771 scope.go:117] "RemoveContainer" containerID="4e811faffbc5ff9f7e22015d649fddb381a32e1afc795b0fdfdf9d23fcc57ca6" Jan 29 09:37:44 crc kubenswrapper[4771]: I0129 09:37:44.681150 4771 scope.go:117] "RemoveContainer" containerID="32e795fb5ebed137c7535b77ecf3169466bb6497a37947a16b61fd960a6a0981" Jan 29 09:37:44 crc kubenswrapper[4771]: I0129 09:37:44.716389 4771 scope.go:117] "RemoveContainer" containerID="c53dd064e0c166279cf4926de8577748404b7bf625bffdbe87505a19726b8404" Jan 29 09:37:44 crc kubenswrapper[4771]: I0129 09:37:44.790055 4771 scope.go:117] "RemoveContainer" containerID="4ea2aabfb0aafeb151996dd2b11d5ef6da9fed5474cb8400065466d43c6cae1d" Jan 29 09:37:44 crc kubenswrapper[4771]: I0129 09:37:44.820976 4771 scope.go:117] "RemoveContainer" containerID="12c0ba4d183513e5f0a76674d005402c3aaa32d532b3a050e3f523edee225af6" Jan 29 09:37:44 crc kubenswrapper[4771]: I0129 09:37:44.862508 4771 scope.go:117] "RemoveContainer" containerID="c2cbe0f39a01d44990aab5ebaba7776d60b888e68d640ab5966658ec3444230e" Jan 29 09:37:44 crc kubenswrapper[4771]: I0129 09:37:44.908376 4771 scope.go:117] "RemoveContainer" containerID="1938a20f726df6da47c3e0b88eac6ef1060e2a20623496094e6f240d59e7c1fe" Jan 29 09:37:47 crc 
kubenswrapper[4771]: E0129 09:37:47.222532 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d4ad1e_f80c_43d6_a515_3f08c23df279.slice/crio-1576c2ce6b6a899a4e7968ec07967ecdf7a690d5affb960091965bdc741628e7.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:37:47 crc kubenswrapper[4771]: I0129 09:37:47.222724 4771 generic.go:334] "Generic (PLEG): container finished" podID="72d4ad1e-f80c-43d6-a515-3f08c23df279" containerID="1576c2ce6b6a899a4e7968ec07967ecdf7a690d5affb960091965bdc741628e7" exitCode=0 Jan 29 09:37:47 crc kubenswrapper[4771]: I0129 09:37:47.222752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" event={"ID":"72d4ad1e-f80c-43d6-a515-3f08c23df279","Type":"ContainerDied","Data":"1576c2ce6b6a899a4e7968ec07967ecdf7a690d5affb960091965bdc741628e7"} Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.635992 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.829547 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbcps\" (UniqueName: \"kubernetes.io/projected/72d4ad1e-f80c-43d6-a515-3f08c23df279-kube-api-access-mbcps\") pod \"72d4ad1e-f80c-43d6-a515-3f08c23df279\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.829788 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-inventory\") pod \"72d4ad1e-f80c-43d6-a515-3f08c23df279\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.829935 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-ssh-key-openstack-edpm-ipam\") pod \"72d4ad1e-f80c-43d6-a515-3f08c23df279\" (UID: \"72d4ad1e-f80c-43d6-a515-3f08c23df279\") " Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.836962 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d4ad1e-f80c-43d6-a515-3f08c23df279-kube-api-access-mbcps" (OuterVolumeSpecName: "kube-api-access-mbcps") pod "72d4ad1e-f80c-43d6-a515-3f08c23df279" (UID: "72d4ad1e-f80c-43d6-a515-3f08c23df279"). InnerVolumeSpecName "kube-api-access-mbcps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.863619 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-inventory" (OuterVolumeSpecName: "inventory") pod "72d4ad1e-f80c-43d6-a515-3f08c23df279" (UID: "72d4ad1e-f80c-43d6-a515-3f08c23df279"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.874917 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72d4ad1e-f80c-43d6-a515-3f08c23df279" (UID: "72d4ad1e-f80c-43d6-a515-3f08c23df279"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.933750 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.933810 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbcps\" (UniqueName: \"kubernetes.io/projected/72d4ad1e-f80c-43d6-a515-3f08c23df279-kube-api-access-mbcps\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:48 crc kubenswrapper[4771]: I0129 09:37:48.933822 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72d4ad1e-f80c-43d6-a515-3f08c23df279-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.242519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" event={"ID":"72d4ad1e-f80c-43d6-a515-3f08c23df279","Type":"ContainerDied","Data":"69f7cd997fc579f5c23cabfb94bfca6db8beec951282cf9489cc798bf9ff1d24"} Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.242568 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zb8mb" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.242583 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f7cd997fc579f5c23cabfb94bfca6db8beec951282cf9489cc798bf9ff1d24" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.347424 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs"] Jan 29 09:37:49 crc kubenswrapper[4771]: E0129 09:37:49.347820 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d4ad1e-f80c-43d6-a515-3f08c23df279" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.347838 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d4ad1e-f80c-43d6-a515-3f08c23df279" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.348052 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d4ad1e-f80c-43d6-a515-3f08c23df279" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.348641 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.351846 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.351983 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.352417 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.353971 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.367238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs"] Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.444106 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.444362 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsppf\" (UniqueName: \"kubernetes.io/projected/880c583d-29a4-44e0-83b0-16795d5eac98-kube-api-access-zsppf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.444655 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.547036 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.547344 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsppf\" (UniqueName: \"kubernetes.io/projected/880c583d-29a4-44e0-83b0-16795d5eac98-kube-api-access-zsppf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.547496 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.551988 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.555822 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.572590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsppf\" (UniqueName: \"kubernetes.io/projected/880c583d-29a4-44e0-83b0-16795d5eac98-kube-api-access-zsppf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:49 crc kubenswrapper[4771]: I0129 09:37:49.665750 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:37:50 crc kubenswrapper[4771]: I0129 09:37:50.209385 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs"] Jan 29 09:37:50 crc kubenswrapper[4771]: I0129 09:37:50.252590 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" event={"ID":"880c583d-29a4-44e0-83b0-16795d5eac98","Type":"ContainerStarted","Data":"ea59c83c602da8cfe7379da1dd2d99cbc5a2b3240deed95a235a1d8886c6efad"} Jan 29 09:37:51 crc kubenswrapper[4771]: I0129 09:37:51.262782 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" event={"ID":"880c583d-29a4-44e0-83b0-16795d5eac98","Type":"ContainerStarted","Data":"c8928bdf65ba15c4ce68a3b29160d9c0038d70697599f884d4b891d8def54b00"} Jan 29 09:37:51 crc kubenswrapper[4771]: I0129 09:37:51.280662 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" podStartSLOduration=1.6431628809999999 podStartE2EDuration="2.280643756s" podCreationTimestamp="2026-01-29 09:37:49 +0000 UTC" firstStartedPulling="2026-01-29 09:37:50.215137915 +0000 UTC m=+1890.337978142" lastFinishedPulling="2026-01-29 09:37:50.85261879 +0000 UTC m=+1890.975459017" observedRunningTime="2026-01-29 09:37:51.276533974 +0000 UTC m=+1891.399374201" watchObservedRunningTime="2026-01-29 09:37:51.280643756 +0000 UTC m=+1891.403483983" Jan 29 09:37:51 crc kubenswrapper[4771]: I0129 09:37:51.838548 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:37:51 crc 
kubenswrapper[4771]: E0129 09:37:51.838962 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:38:02 crc kubenswrapper[4771]: I0129 09:38:02.045206 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4dv79"] Jan 29 09:38:02 crc kubenswrapper[4771]: I0129 09:38:02.080572 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4dv79"] Jan 29 09:38:02 crc kubenswrapper[4771]: I0129 09:38:02.851203 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee2c782-d165-4a0b-bd83-9f506dd349b1" path="/var/lib/kubelet/pods/9ee2c782-d165-4a0b-bd83-9f506dd349b1/volumes" Jan 29 09:38:04 crc kubenswrapper[4771]: I0129 09:38:04.026042 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kj4q4"] Jan 29 09:38:04 crc kubenswrapper[4771]: I0129 09:38:04.033874 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kj4q4"] Jan 29 09:38:04 crc kubenswrapper[4771]: I0129 09:38:04.848464 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e458d2e-a019-469c-aee4-869073bfd47b" path="/var/lib/kubelet/pods/6e458d2e-a019-469c-aee4-869073bfd47b/volumes" Jan 29 09:38:05 crc kubenswrapper[4771]: I0129 09:38:05.838825 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:38:05 crc kubenswrapper[4771]: E0129 09:38:05.839160 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:38:17 crc kubenswrapper[4771]: I0129 09:38:17.001328 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:38:17 crc kubenswrapper[4771]: I0129 09:38:17.501807 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"31676f06b71cda577f4f9037e4996c95a39d3e20387b8d818c901817022dfe5a"} Jan 29 09:38:38 crc kubenswrapper[4771]: I0129 09:38:38.678515 4771 generic.go:334] "Generic (PLEG): container finished" podID="880c583d-29a4-44e0-83b0-16795d5eac98" containerID="c8928bdf65ba15c4ce68a3b29160d9c0038d70697599f884d4b891d8def54b00" exitCode=0 Jan 29 09:38:38 crc kubenswrapper[4771]: I0129 09:38:38.678889 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" event={"ID":"880c583d-29a4-44e0-83b0-16795d5eac98","Type":"ContainerDied","Data":"c8928bdf65ba15c4ce68a3b29160d9c0038d70697599f884d4b891d8def54b00"} Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.131024 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.245932 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-inventory\") pod \"880c583d-29a4-44e0-83b0-16795d5eac98\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.246101 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsppf\" (UniqueName: \"kubernetes.io/projected/880c583d-29a4-44e0-83b0-16795d5eac98-kube-api-access-zsppf\") pod \"880c583d-29a4-44e0-83b0-16795d5eac98\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.246149 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-ssh-key-openstack-edpm-ipam\") pod \"880c583d-29a4-44e0-83b0-16795d5eac98\" (UID: \"880c583d-29a4-44e0-83b0-16795d5eac98\") " Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.252669 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/880c583d-29a4-44e0-83b0-16795d5eac98-kube-api-access-zsppf" (OuterVolumeSpecName: "kube-api-access-zsppf") pod "880c583d-29a4-44e0-83b0-16795d5eac98" (UID: "880c583d-29a4-44e0-83b0-16795d5eac98"). InnerVolumeSpecName "kube-api-access-zsppf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.288885 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-inventory" (OuterVolumeSpecName: "inventory") pod "880c583d-29a4-44e0-83b0-16795d5eac98" (UID: "880c583d-29a4-44e0-83b0-16795d5eac98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.299815 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "880c583d-29a4-44e0-83b0-16795d5eac98" (UID: "880c583d-29a4-44e0-83b0-16795d5eac98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.348909 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.348947 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsppf\" (UniqueName: \"kubernetes.io/projected/880c583d-29a4-44e0-83b0-16795d5eac98-kube-api-access-zsppf\") on node \"crc\" DevicePath \"\"" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.348958 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/880c583d-29a4-44e0-83b0-16795d5eac98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.705320 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" event={"ID":"880c583d-29a4-44e0-83b0-16795d5eac98","Type":"ContainerDied","Data":"ea59c83c602da8cfe7379da1dd2d99cbc5a2b3240deed95a235a1d8886c6efad"} Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.705671 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea59c83c602da8cfe7379da1dd2d99cbc5a2b3240deed95a235a1d8886c6efad" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.705381 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.779475 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mb82t"] Jan 29 09:38:40 crc kubenswrapper[4771]: E0129 09:38:40.779907 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880c583d-29a4-44e0-83b0-16795d5eac98" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.779929 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="880c583d-29a4-44e0-83b0-16795d5eac98" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.780202 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="880c583d-29a4-44e0-83b0-16795d5eac98" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.780968 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.782932 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.783143 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.783156 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.783462 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.788949 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mb82t"] Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.966329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.966410 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkz8c\" (UniqueName: \"kubernetes.io/projected/b4199724-f14d-423d-82f1-8a1438e624fb-kube-api-access-jkz8c\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:40 crc kubenswrapper[4771]: I0129 09:38:40.966554 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:41 crc kubenswrapper[4771]: I0129 09:38:41.069021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:41 crc kubenswrapper[4771]: I0129 09:38:41.069154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:41 crc kubenswrapper[4771]: I0129 09:38:41.069206 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkz8c\" (UniqueName: \"kubernetes.io/projected/b4199724-f14d-423d-82f1-8a1438e624fb-kube-api-access-jkz8c\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:41 crc 
kubenswrapper[4771]: I0129 09:38:41.073211 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:41 crc kubenswrapper[4771]: I0129 09:38:41.073909 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:41 crc kubenswrapper[4771]: I0129 09:38:41.099384 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkz8c\" (UniqueName: \"kubernetes.io/projected/b4199724-f14d-423d-82f1-8a1438e624fb-kube-api-access-jkz8c\") pod \"ssh-known-hosts-edpm-deployment-mb82t\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:41 crc kubenswrapper[4771]: I0129 09:38:41.399340 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:42 crc kubenswrapper[4771]: I0129 09:38:42.007995 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mb82t"] Jan 29 09:38:42 crc kubenswrapper[4771]: I0129 09:38:42.010270 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:38:42 crc kubenswrapper[4771]: I0129 09:38:42.736016 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" event={"ID":"b4199724-f14d-423d-82f1-8a1438e624fb","Type":"ContainerStarted","Data":"a8eb6e715948631c82aa53898d3418105fa60c6b2423dc0c2517e4e4b3f82ba4"} Jan 29 09:38:42 crc kubenswrapper[4771]: I0129 09:38:42.736307 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" event={"ID":"b4199724-f14d-423d-82f1-8a1438e624fb","Type":"ContainerStarted","Data":"e184951e53b1dbefe99cc8ca112b867922047a83b6e64a53b92997510b2588c5"} Jan 29 09:38:42 crc kubenswrapper[4771]: I0129 09:38:42.756354 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" podStartSLOduration=2.259097733 podStartE2EDuration="2.756334941s" podCreationTimestamp="2026-01-29 09:38:40 +0000 UTC" firstStartedPulling="2026-01-29 09:38:42.009976605 +0000 UTC m=+1942.132816832" lastFinishedPulling="2026-01-29 09:38:42.507213803 +0000 UTC m=+1942.630054040" observedRunningTime="2026-01-29 09:38:42.749759952 +0000 UTC m=+1942.872600189" watchObservedRunningTime="2026-01-29 09:38:42.756334941 +0000 UTC m=+1942.879175168" Jan 29 09:38:45 crc kubenswrapper[4771]: I0129 09:38:45.043469 4771 scope.go:117] "RemoveContainer" containerID="cad377085e919a01bef4cc3703f76e157fa912ada11dbde61abdaa398f83fdfc" Jan 29 09:38:45 crc kubenswrapper[4771]: I0129 09:38:45.093685 4771 scope.go:117] "RemoveContainer" containerID="92df250051b5a4b3817ccfc9478c4f6d465f2e4078aff42e30781aa97c307fb5" Jan 29 09:38:46 crc kubenswrapper[4771]: I0129 09:38:46.044255 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnw9v"] Jan 29 
09:38:46 crc kubenswrapper[4771]: I0129 09:38:46.051502 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnw9v"] Jan 29 09:38:46 crc kubenswrapper[4771]: I0129 09:38:46.848388 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc41997c-ddd8-46fd-8c3f-b7bddcf11b59" path="/var/lib/kubelet/pods/bc41997c-ddd8-46fd-8c3f-b7bddcf11b59/volumes" Jan 29 09:38:49 crc kubenswrapper[4771]: I0129 09:38:49.812059 4771 generic.go:334] "Generic (PLEG): container finished" podID="b4199724-f14d-423d-82f1-8a1438e624fb" containerID="a8eb6e715948631c82aa53898d3418105fa60c6b2423dc0c2517e4e4b3f82ba4" exitCode=0 Jan 29 09:38:49 crc kubenswrapper[4771]: I0129 09:38:49.812135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" event={"ID":"b4199724-f14d-423d-82f1-8a1438e624fb","Type":"ContainerDied","Data":"a8eb6e715948631c82aa53898d3418105fa60c6b2423dc0c2517e4e4b3f82ba4"} Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.295647 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.469614 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkz8c\" (UniqueName: \"kubernetes.io/projected/b4199724-f14d-423d-82f1-8a1438e624fb-kube-api-access-jkz8c\") pod \"b4199724-f14d-423d-82f1-8a1438e624fb\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.470020 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-ssh-key-openstack-edpm-ipam\") pod \"b4199724-f14d-423d-82f1-8a1438e624fb\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.470175 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-inventory-0\") pod \"b4199724-f14d-423d-82f1-8a1438e624fb\" (UID: \"b4199724-f14d-423d-82f1-8a1438e624fb\") " Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.478887 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4199724-f14d-423d-82f1-8a1438e624fb-kube-api-access-jkz8c" (OuterVolumeSpecName: "kube-api-access-jkz8c") pod "b4199724-f14d-423d-82f1-8a1438e624fb" (UID: "b4199724-f14d-423d-82f1-8a1438e624fb"). InnerVolumeSpecName "kube-api-access-jkz8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.497394 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b4199724-f14d-423d-82f1-8a1438e624fb" (UID: "b4199724-f14d-423d-82f1-8a1438e624fb"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.497850 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b4199724-f14d-423d-82f1-8a1438e624fb" (UID: "b4199724-f14d-423d-82f1-8a1438e624fb"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.571879 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkz8c\" (UniqueName: \"kubernetes.io/projected/b4199724-f14d-423d-82f1-8a1438e624fb-kube-api-access-jkz8c\") on node \"crc\" DevicePath \"\"" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.571913 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.571950 4771 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4199724-f14d-423d-82f1-8a1438e624fb-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.833248 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" event={"ID":"b4199724-f14d-423d-82f1-8a1438e624fb","Type":"ContainerDied","Data":"e184951e53b1dbefe99cc8ca112b867922047a83b6e64a53b92997510b2588c5"} Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.833295 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e184951e53b1dbefe99cc8ca112b867922047a83b6e64a53b92997510b2588c5" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.833348 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mb82t" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.928374 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4"] Jan 29 09:38:51 crc kubenswrapper[4771]: E0129 09:38:51.928864 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4199724-f14d-423d-82f1-8a1438e624fb" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.928885 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4199724-f14d-423d-82f1-8a1438e624fb" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.929042 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4199724-f14d-423d-82f1-8a1438e624fb" containerName="ssh-known-hosts-edpm-deployment" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.929726 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.933104 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.933423 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.933304 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.936441 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:38:51 crc kubenswrapper[4771]: I0129 09:38:51.936817 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4"] Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.080439 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.080949 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.081148 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbdw\" (UniqueName: \"kubernetes.io/projected/183736e8-0ae6-459f-9dbc-1b5a9d60539d-kube-api-access-qxbdw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.183081 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbdw\" (UniqueName: \"kubernetes.io/projected/183736e8-0ae6-459f-9dbc-1b5a9d60539d-kube-api-access-qxbdw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.183175 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.183257 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.189682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.196304 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.204037 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbdw\" (UniqueName: \"kubernetes.io/projected/183736e8-0ae6-459f-9dbc-1b5a9d60539d-kube-api-access-qxbdw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dzfr4\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.247019 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.764837 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4"] Jan 29 09:38:52 crc kubenswrapper[4771]: I0129 09:38:52.857154 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" event={"ID":"183736e8-0ae6-459f-9dbc-1b5a9d60539d","Type":"ContainerStarted","Data":"dafbeab52bcf3a30b19373e76c2ef651761308b1b23737dadbd872068fa3c566"} Jan 29 09:38:53 crc kubenswrapper[4771]: I0129 09:38:53.865353 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" event={"ID":"183736e8-0ae6-459f-9dbc-1b5a9d60539d","Type":"ContainerStarted","Data":"48537cc1a78f7dc0d3cf9a58e46213eaf96d3e00099f1cbd65134ead11d126e9"} Jan 29 09:38:53 crc kubenswrapper[4771]: I0129 09:38:53.882146 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" podStartSLOduration=2.426008559 podStartE2EDuration="2.882128179s" podCreationTimestamp="2026-01-29 09:38:51 +0000 UTC" firstStartedPulling="2026-01-29 09:38:52.771115492 +0000 UTC m=+1952.893955719" lastFinishedPulling="2026-01-29 09:38:53.227235112 +0000 UTC m=+1953.350075339" observedRunningTime="2026-01-29 09:38:53.877216285 +0000 UTC m=+1954.000056512" watchObservedRunningTime="2026-01-29 09:38:53.882128179 +0000 UTC m=+1954.004968416" Jan 29 09:39:00 crc kubenswrapper[4771]: I0129 09:39:00.922450 4771 generic.go:334] "Generic (PLEG): container finished" podID="183736e8-0ae6-459f-9dbc-1b5a9d60539d" containerID="48537cc1a78f7dc0d3cf9a58e46213eaf96d3e00099f1cbd65134ead11d126e9" exitCode=0 Jan 29 09:39:00 crc kubenswrapper[4771]: I0129 09:39:00.922566 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" event={"ID":"183736e8-0ae6-459f-9dbc-1b5a9d60539d","Type":"ContainerDied","Data":"48537cc1a78f7dc0d3cf9a58e46213eaf96d3e00099f1cbd65134ead11d126e9"} Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.323387 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.509714 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbdw\" (UniqueName: \"kubernetes.io/projected/183736e8-0ae6-459f-9dbc-1b5a9d60539d-kube-api-access-qxbdw\") pod \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.510067 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-ssh-key-openstack-edpm-ipam\") pod \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.510153 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-inventory\") pod \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\" (UID: \"183736e8-0ae6-459f-9dbc-1b5a9d60539d\") " Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.515334 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183736e8-0ae6-459f-9dbc-1b5a9d60539d-kube-api-access-qxbdw" (OuterVolumeSpecName: "kube-api-access-qxbdw") pod "183736e8-0ae6-459f-9dbc-1b5a9d60539d" (UID: "183736e8-0ae6-459f-9dbc-1b5a9d60539d"). InnerVolumeSpecName "kube-api-access-qxbdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.535467 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-inventory" (OuterVolumeSpecName: "inventory") pod "183736e8-0ae6-459f-9dbc-1b5a9d60539d" (UID: "183736e8-0ae6-459f-9dbc-1b5a9d60539d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.537032 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "183736e8-0ae6-459f-9dbc-1b5a9d60539d" (UID: "183736e8-0ae6-459f-9dbc-1b5a9d60539d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.612214 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.612247 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/183736e8-0ae6-459f-9dbc-1b5a9d60539d-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.612256 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbdw\" (UniqueName: \"kubernetes.io/projected/183736e8-0ae6-459f-9dbc-1b5a9d60539d-kube-api-access-qxbdw\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.941549 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" event={"ID":"183736e8-0ae6-459f-9dbc-1b5a9d60539d","Type":"ContainerDied","Data":"dafbeab52bcf3a30b19373e76c2ef651761308b1b23737dadbd872068fa3c566"} Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.941593 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafbeab52bcf3a30b19373e76c2ef651761308b1b23737dadbd872068fa3c566" Jan 29 09:39:02 crc kubenswrapper[4771]: I0129 09:39:02.941630 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dzfr4" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.023024 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs"] Jan 29 09:39:03 crc kubenswrapper[4771]: E0129 09:39:03.023613 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183736e8-0ae6-459f-9dbc-1b5a9d60539d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.023630 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="183736e8-0ae6-459f-9dbc-1b5a9d60539d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.023843 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="183736e8-0ae6-459f-9dbc-1b5a9d60539d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.024476 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.026538 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.026815 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.027328 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.027887 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.032999 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs"] Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.222272 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwtx\" (UniqueName: \"kubernetes.io/projected/665612c6-6f64-4e6e-a9d7-770665c7abff-kube-api-access-6zwtx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.222961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.223040 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.329779 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zwtx\" (UniqueName: \"kubernetes.io/projected/665612c6-6f64-4e6e-a9d7-770665c7abff-kube-api-access-6zwtx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.329956 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.330113 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.336021 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.344656 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.354872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zwtx\" (UniqueName: \"kubernetes.io/projected/665612c6-6f64-4e6e-a9d7-770665c7abff-kube-api-access-6zwtx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:03 crc kubenswrapper[4771]: I0129 09:39:03.655132 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:04 crc kubenswrapper[4771]: I0129 09:39:04.214826 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs"] Jan 29 09:39:04 crc kubenswrapper[4771]: I0129 09:39:04.962352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" event={"ID":"665612c6-6f64-4e6e-a9d7-770665c7abff","Type":"ContainerStarted","Data":"5a919980c207820e916a636a7580f2db04084bcea2fdf6ab0feda00ba0e904ea"} Jan 29 09:39:04 crc kubenswrapper[4771]: I0129 09:39:04.962840 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" event={"ID":"665612c6-6f64-4e6e-a9d7-770665c7abff","Type":"ContainerStarted","Data":"879e3ddf36f42963e574725c40c2ee2124b9489da4ba46eb0a092ab8f49f6dd3"} Jan 29 09:39:04 crc kubenswrapper[4771]: I0129 09:39:04.987684 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" podStartSLOduration=1.568075391 podStartE2EDuration="1.987647386s" podCreationTimestamp="2026-01-29 09:39:03 +0000 UTC" firstStartedPulling="2026-01-29 09:39:04.222633173 +0000 UTC m=+1964.345473400" lastFinishedPulling="2026-01-29 09:39:04.642205168 +0000 UTC m=+1964.765045395" observedRunningTime="2026-01-29 09:39:04.980237015 +0000 UTC m=+1965.103077252" watchObservedRunningTime="2026-01-29 09:39:04.987647386 +0000 UTC m=+1965.110487613" Jan 29 09:39:14 crc kubenswrapper[4771]: I0129 09:39:14.039158 4771 generic.go:334] "Generic (PLEG): container finished" podID="665612c6-6f64-4e6e-a9d7-770665c7abff" containerID="5a919980c207820e916a636a7580f2db04084bcea2fdf6ab0feda00ba0e904ea" exitCode=0 Jan 29 09:39:14 crc kubenswrapper[4771]: I0129 09:39:14.039465 4771 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" event={"ID":"665612c6-6f64-4e6e-a9d7-770665c7abff","Type":"ContainerDied","Data":"5a919980c207820e916a636a7580f2db04084bcea2fdf6ab0feda00ba0e904ea"} Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.426093 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.572329 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zwtx\" (UniqueName: \"kubernetes.io/projected/665612c6-6f64-4e6e-a9d7-770665c7abff-kube-api-access-6zwtx\") pod \"665612c6-6f64-4e6e-a9d7-770665c7abff\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.572676 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-inventory\") pod \"665612c6-6f64-4e6e-a9d7-770665c7abff\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.572813 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-ssh-key-openstack-edpm-ipam\") pod \"665612c6-6f64-4e6e-a9d7-770665c7abff\" (UID: \"665612c6-6f64-4e6e-a9d7-770665c7abff\") " Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.584914 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665612c6-6f64-4e6e-a9d7-770665c7abff-kube-api-access-6zwtx" (OuterVolumeSpecName: "kube-api-access-6zwtx") pod "665612c6-6f64-4e6e-a9d7-770665c7abff" (UID: "665612c6-6f64-4e6e-a9d7-770665c7abff"). InnerVolumeSpecName "kube-api-access-6zwtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.600357 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-inventory" (OuterVolumeSpecName: "inventory") pod "665612c6-6f64-4e6e-a9d7-770665c7abff" (UID: "665612c6-6f64-4e6e-a9d7-770665c7abff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.609879 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "665612c6-6f64-4e6e-a9d7-770665c7abff" (UID: "665612c6-6f64-4e6e-a9d7-770665c7abff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.675345 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zwtx\" (UniqueName: \"kubernetes.io/projected/665612c6-6f64-4e6e-a9d7-770665c7abff-kube-api-access-6zwtx\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.675387 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:15 crc kubenswrapper[4771]: I0129 09:39:15.675399 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/665612c6-6f64-4e6e-a9d7-770665c7abff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.061217 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" event={"ID":"665612c6-6f64-4e6e-a9d7-770665c7abff","Type":"ContainerDied","Data":"879e3ddf36f42963e574725c40c2ee2124b9489da4ba46eb0a092ab8f49f6dd3"} Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.061260 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="879e3ddf36f42963e574725c40c2ee2124b9489da4ba46eb0a092ab8f49f6dd3" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.061267 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.145465 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d"] Jan 29 09:39:16 crc kubenswrapper[4771]: E0129 09:39:16.145889 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665612c6-6f64-4e6e-a9d7-770665c7abff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.145908 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="665612c6-6f64-4e6e-a9d7-770665c7abff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.146088 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="665612c6-6f64-4e6e-a9d7-770665c7abff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.146755 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.148892 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.149393 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.149442 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.149521 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.149564 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.149617 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.149918 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.151375 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.169895 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d"] Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.288948 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289018 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289086 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289109 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289136 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289206 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289295 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289315 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289480 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289578 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" 
(UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289612 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289661 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrjx\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-kube-api-access-dcrjx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.289685 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.391643 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.391974 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.392135 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.392243 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 
09:39:16.392356 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.392469 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.392588 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.392690 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.392846 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcrjx\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-kube-api-access-dcrjx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.392966 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.393102 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.393234 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.393429 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.393573 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.395780 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.396442 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.396645 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.397352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.397430 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 
09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.398485 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.398494 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.399105 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.399322 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.399889 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.400389 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.401617 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.409490 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.411199 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrjx\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-kube-api-access-dcrjx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.463744 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:16 crc kubenswrapper[4771]: I0129 09:39:16.964411 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d"] Jan 29 09:39:17 crc kubenswrapper[4771]: I0129 09:39:17.075737 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" event={"ID":"5fc93c8e-ca3e-403c-b42e-fea90628e728","Type":"ContainerStarted","Data":"4139275e590e7e8cd0625b62afd3948d29cd1c66115136a65a4041caf2ccb283"} Jan 29 09:39:18 crc kubenswrapper[4771]: I0129 09:39:18.091773 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" event={"ID":"5fc93c8e-ca3e-403c-b42e-fea90628e728","Type":"ContainerStarted","Data":"ffde6f7aaf0c58156a96cb2625407a8741aec15ba94297f515e586ae554e19eb"} Jan 29 09:39:18 crc kubenswrapper[4771]: I0129 09:39:18.116613 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" podStartSLOduration=1.5554242679999999 podStartE2EDuration="2.116578576s" podCreationTimestamp="2026-01-29 09:39:16 +0000 UTC" firstStartedPulling="2026-01-29 09:39:16.967565224 +0000 UTC m=+1977.090405451" lastFinishedPulling="2026-01-29 09:39:17.528719532 +0000 UTC m=+1977.651559759" observedRunningTime="2026-01-29 09:39:18.110280104 +0000 UTC m=+1978.233120341" watchObservedRunningTime="2026-01-29 09:39:18.116578576 +0000 UTC m=+1978.239418803" Jan 29 09:39:45 crc kubenswrapper[4771]: I0129 09:39:45.176338 4771 scope.go:117] "RemoveContainer" containerID="0d0843d6350b16602fdd5a703f23e156304278cdebbcc5b0c8ac81a153ef27ba" Jan 29 09:39:45 crc kubenswrapper[4771]: I0129 09:39:45.201086 4771 scope.go:117] "RemoveContainer" containerID="027301ee4b908d383c97b90cb84e6525ae31aee7987c0f7ceae48dd25e7a8f7d" Jan 29 09:39:45 crc kubenswrapper[4771]: I0129 09:39:45.221460 4771 scope.go:117] "RemoveContainer" containerID="3977ba9b556a7bb9bf02fd50498e6f2ef68dee87e04fcd8351eff959b08fed3c" Jan 29 09:39:45 crc kubenswrapper[4771]: I0129 09:39:45.265631 4771 scope.go:117] "RemoveContainer" containerID="dac7d9256e375fb5c36a33aff78fdf37b75e1c9de084d7fa22ab4bb43cf6d167" Jan 29 09:39:53 crc kubenswrapper[4771]: I0129 09:39:53.388826 4771 generic.go:334] "Generic (PLEG): container finished" podID="5fc93c8e-ca3e-403c-b42e-fea90628e728" containerID="ffde6f7aaf0c58156a96cb2625407a8741aec15ba94297f515e586ae554e19eb" exitCode=0 Jan 29 09:39:53 crc kubenswrapper[4771]: I0129 09:39:53.388935 4771 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" event={"ID":"5fc93c8e-ca3e-403c-b42e-fea90628e728","Type":"ContainerDied","Data":"ffde6f7aaf0c58156a96cb2625407a8741aec15ba94297f515e586ae554e19eb"} Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.801118 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.862431 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-libvirt-combined-ca-bundle\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.862487 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-neutron-metadata-combined-ca-bundle\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.862548 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-ovn-default-certs-0\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.862579 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ssh-key-openstack-edpm-ipam\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.862615 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-bootstrap-combined-ca-bundle\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.862717 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ovn-combined-ca-bundle\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.862767 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-repo-setup-combined-ca-bundle\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.863671 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-telemetry-combined-ca-bundle\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 
09:39:54.863853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-nova-combined-ca-bundle\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.864127 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcrjx\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-kube-api-access-dcrjx\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.864208 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-inventory\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.864240 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.864271 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.864321 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"5fc93c8e-ca3e-403c-b42e-fea90628e728\" (UID: \"5fc93c8e-ca3e-403c-b42e-fea90628e728\") " Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.869774 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.870349 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.871639 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.871670 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.872129 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.872483 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.872491 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.873106 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.873327 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.873805 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.874833 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.875498 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-kube-api-access-dcrjx" (OuterVolumeSpecName: "kube-api-access-dcrjx") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "kube-api-access-dcrjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.894814 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.899595 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-inventory" (OuterVolumeSpecName: "inventory") pod "5fc93c8e-ca3e-403c-b42e-fea90628e728" (UID: "5fc93c8e-ca3e-403c-b42e-fea90628e728"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966805 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966847 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966867 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966882 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966895 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966910 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966924 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966937 4771 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966951 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966963 4771 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966975 4771 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966988 4771 reconciler_common.go:293] "Volume detached for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.966999 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcrjx\" (UniqueName: \"kubernetes.io/projected/5fc93c8e-ca3e-403c-b42e-fea90628e728-kube-api-access-dcrjx\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:54 crc kubenswrapper[4771]: I0129 09:39:54.967011 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fc93c8e-ca3e-403c-b42e-fea90628e728-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.411304 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" event={"ID":"5fc93c8e-ca3e-403c-b42e-fea90628e728","Type":"ContainerDied","Data":"4139275e590e7e8cd0625b62afd3948d29cd1c66115136a65a4041caf2ccb283"} Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.411348 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4139275e590e7e8cd0625b62afd3948d29cd1c66115136a65a4041caf2ccb283" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.411442 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.512090 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx"] Jan 29 09:39:55 crc kubenswrapper[4771]: E0129 09:39:55.512582 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc93c8e-ca3e-403c-b42e-fea90628e728" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.512610 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc93c8e-ca3e-403c-b42e-fea90628e728" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.512850 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc93c8e-ca3e-403c-b42e-fea90628e728" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.513454 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.516957 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.517355 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.517524 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.517728 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.517829 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.522335 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx"] Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.579227 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn9x4\" (UniqueName: \"kubernetes.io/projected/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-kube-api-access-gn9x4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.580540 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.580718 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.580807 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.580967 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.682804 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-gn9x4\" (UniqueName: \"kubernetes.io/projected/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-kube-api-access-gn9x4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.682913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.682966 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.682993 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.683038 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.685160 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.700330 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.700360 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.700453 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.704650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn9x4\" (UniqueName: \"kubernetes.io/projected/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-kube-api-access-gn9x4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-99hxx\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:55 crc kubenswrapper[4771]: I0129 09:39:55.845532 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:39:56 crc kubenswrapper[4771]: I0129 09:39:56.346992 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx"] Jan 29 09:39:56 crc kubenswrapper[4771]: I0129 09:39:56.419887 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" event={"ID":"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc","Type":"ContainerStarted","Data":"c0452d07a02fc066529cddc192039b42591a3b2e46b2428fdd721d6ef9e03356"} Jan 29 09:39:57 crc kubenswrapper[4771]: I0129 09:39:57.428534 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" event={"ID":"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc","Type":"ContainerStarted","Data":"38385f43757df003f51a007a735cd88b3f1468e484551730463765971fc77228"} Jan 29 09:40:44 crc kubenswrapper[4771]: I0129 09:40:44.271478 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:40:44 crc kubenswrapper[4771]: I0129 09:40:44.272112 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:40:57 crc kubenswrapper[4771]: I0129 09:40:57.970031 4771 generic.go:334] "Generic (PLEG): container finished" podID="b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" containerID="38385f43757df003f51a007a735cd88b3f1468e484551730463765971fc77228" exitCode=0 Jan 29 09:40:57 crc kubenswrapper[4771]: I0129 09:40:57.971504 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" event={"ID":"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc","Type":"ContainerDied","Data":"38385f43757df003f51a007a735cd88b3f1468e484551730463765971fc77228"} Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.447228 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.521634 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ssh-key-openstack-edpm-ipam\") pod \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.521726 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovn-combined-ca-bundle\") pod \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.521796 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovncontroller-config-0\") pod \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.521878 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-inventory\") pod \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.522784 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn9x4\" (UniqueName: \"kubernetes.io/projected/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-kube-api-access-gn9x4\") pod \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\" (UID: \"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc\") " Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.536359 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-kube-api-access-gn9x4" (OuterVolumeSpecName: "kube-api-access-gn9x4") pod "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" (UID: "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc"). InnerVolumeSpecName "kube-api-access-gn9x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.550784 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" (UID: "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.634477 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" (UID: "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.635493 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.635516 4771 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.635542 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn9x4\" (UniqueName: \"kubernetes.io/projected/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-kube-api-access-gn9x4\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.651683 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-inventory" (OuterVolumeSpecName: "inventory") pod "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" (UID: "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.672765 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" (UID: "b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.737800 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.737995 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.987711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" event={"ID":"b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc","Type":"ContainerDied","Data":"c0452d07a02fc066529cddc192039b42591a3b2e46b2428fdd721d6ef9e03356"} Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.987734 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-99hxx" Jan 29 09:40:59 crc kubenswrapper[4771]: I0129 09:40:59.987749 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0452d07a02fc066529cddc192039b42591a3b2e46b2428fdd721d6ef9e03356" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.092462 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs"] Jan 29 09:41:00 crc kubenswrapper[4771]: E0129 09:41:00.092990 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.093015 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.093238 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.093890 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.095790 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.095973 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.096054 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.096181 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.096342 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.096783 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.105949 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs"] Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.246093 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftkzc\" (UniqueName: \"kubernetes.io/projected/262d9611-9da4-4ea4-82ab-abbcaab91a0d-kube-api-access-ftkzc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.246138 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: 
\"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.246171 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.246202 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.246346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.246402 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.349618 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.349717 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftkzc\" (UniqueName: \"kubernetes.io/projected/262d9611-9da4-4ea4-82ab-abbcaab91a0d-kube-api-access-ftkzc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.349763 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.349823 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.349871 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.350018 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.355160 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.360522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.372001 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.375126 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.378453 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.381927 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftkzc\" (UniqueName: \"kubernetes.io/projected/262d9611-9da4-4ea4-82ab-abbcaab91a0d-kube-api-access-ftkzc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.423052 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.972650 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs"] Jan 29 09:41:00 crc kubenswrapper[4771]: I0129 09:41:00.999336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" event={"ID":"262d9611-9da4-4ea4-82ab-abbcaab91a0d","Type":"ContainerStarted","Data":"a7374fa514af8acbce0deb6bb3ddfed78b8ec4920af5ab39b36f77c792eda234"} Jan 29 09:41:02 crc kubenswrapper[4771]: I0129 09:41:02.007548 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" event={"ID":"262d9611-9da4-4ea4-82ab-abbcaab91a0d","Type":"ContainerStarted","Data":"26b5b9c1aa4059ce2efba0ed327ff2521a3b896ca39dcec76561bfcc9fd2de14"} Jan 29 09:41:02 crc kubenswrapper[4771]: I0129 09:41:02.031358 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" podStartSLOduration=1.5577620620000001 podStartE2EDuration="2.031333856s" podCreationTimestamp="2026-01-29 09:41:00 +0000 UTC" firstStartedPulling="2026-01-29 09:41:00.963331175 +0000 UTC m=+2081.086171402" lastFinishedPulling="2026-01-29 09:41:01.436902969 +0000 UTC m=+2081.559743196" observedRunningTime="2026-01-29 09:41:02.026457984 +0000 UTC m=+2082.149298281" watchObservedRunningTime="2026-01-29 09:41:02.031333856 +0000 UTC m=+2082.154174083" Jan 29 09:41:14 crc kubenswrapper[4771]: I0129 09:41:14.270938 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:41:14 crc kubenswrapper[4771]: I0129 09:41:14.271448 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:41:44 crc kubenswrapper[4771]: I0129 09:41:44.271502 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:41:44 crc kubenswrapper[4771]: I0129 
09:41:44.272078 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:41:44 crc kubenswrapper[4771]: I0129 09:41:44.272122 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:41:44 crc kubenswrapper[4771]: I0129 09:41:44.272809 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31676f06b71cda577f4f9037e4996c95a39d3e20387b8d818c901817022dfe5a"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:41:44 crc kubenswrapper[4771]: I0129 09:41:44.272859 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://31676f06b71cda577f4f9037e4996c95a39d3e20387b8d818c901817022dfe5a" gracePeriod=600 Jan 29 09:41:45 crc kubenswrapper[4771]: I0129 09:41:45.398755 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="31676f06b71cda577f4f9037e4996c95a39d3e20387b8d818c901817022dfe5a" exitCode=0 Jan 29 09:41:45 crc kubenswrapper[4771]: I0129 09:41:45.398830 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"31676f06b71cda577f4f9037e4996c95a39d3e20387b8d818c901817022dfe5a"} Jan 29 09:41:45 crc kubenswrapper[4771]: I0129 09:41:45.399321 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d"} Jan 29 09:41:45 crc kubenswrapper[4771]: I0129 09:41:45.399366 4771 scope.go:117] "RemoveContainer" containerID="dd353bd99e73715e1fa86d4cd6a6849473d017ee80253bb80f4ae573f07fe5b4" Jan 29 09:41:48 crc kubenswrapper[4771]: I0129 09:41:48.426507 4771 generic.go:334] "Generic (PLEG): container finished" podID="262d9611-9da4-4ea4-82ab-abbcaab91a0d" containerID="26b5b9c1aa4059ce2efba0ed327ff2521a3b896ca39dcec76561bfcc9fd2de14" exitCode=0 Jan 29 09:41:48 crc kubenswrapper[4771]: I0129 09:41:48.426540 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" event={"ID":"262d9611-9da4-4ea4-82ab-abbcaab91a0d","Type":"ContainerDied","Data":"26b5b9c1aa4059ce2efba0ed327ff2521a3b896ca39dcec76561bfcc9fd2de14"} Jan 29 09:41:48 crc kubenswrapper[4771]: I0129 09:41:48.895452 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gdqfk"] Jan 29 09:41:48 crc kubenswrapper[4771]: I0129 09:41:48.897709 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:48 crc kubenswrapper[4771]: I0129 09:41:48.912979 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdqfk"] Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.027670 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l86zc\" (UniqueName: \"kubernetes.io/projected/c8e6ec83-22dd-47ba-8258-5c99128b041d-kube-api-access-l86zc\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.027801 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-utilities\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.027920 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-catalog-content\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.129853 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-utilities\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.129946 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-catalog-content\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.130076 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l86zc\" (UniqueName: \"kubernetes.io/projected/c8e6ec83-22dd-47ba-8258-5c99128b041d-kube-api-access-l86zc\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.130319 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-utilities\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.130540 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-catalog-content\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.158190 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l86zc\" (UniqueName: \"kubernetes.io/projected/c8e6ec83-22dd-47ba-8258-5c99128b041d-kube-api-access-l86zc\") pod \"redhat-operators-gdqfk\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.225827 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.745204 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdqfk"] Jan 29 09:41:49 crc kubenswrapper[4771]: I0129 09:41:49.958429 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.046574 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.047072 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftkzc\" (UniqueName: \"kubernetes.io/projected/262d9611-9da4-4ea4-82ab-abbcaab91a0d-kube-api-access-ftkzc\") pod \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.047126 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-nova-metadata-neutron-config-0\") pod \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.047145 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-inventory\") pod \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.047236 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-metadata-combined-ca-bundle\") pod \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.047321 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-ssh-key-openstack-edpm-ipam\") pod \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\" (UID: \"262d9611-9da4-4ea4-82ab-abbcaab91a0d\") " Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.064049 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "262d9611-9da4-4ea4-82ab-abbcaab91a0d" (UID: "262d9611-9da4-4ea4-82ab-abbcaab91a0d"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.064329 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262d9611-9da4-4ea4-82ab-abbcaab91a0d-kube-api-access-ftkzc" (OuterVolumeSpecName: "kube-api-access-ftkzc") pod "262d9611-9da4-4ea4-82ab-abbcaab91a0d" (UID: "262d9611-9da4-4ea4-82ab-abbcaab91a0d"). InnerVolumeSpecName "kube-api-access-ftkzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.092732 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "262d9611-9da4-4ea4-82ab-abbcaab91a0d" (UID: "262d9611-9da4-4ea4-82ab-abbcaab91a0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.108995 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "262d9611-9da4-4ea4-82ab-abbcaab91a0d" (UID: "262d9611-9da4-4ea4-82ab-abbcaab91a0d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.109498 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-inventory" (OuterVolumeSpecName: "inventory") pod "262d9611-9da4-4ea4-82ab-abbcaab91a0d" (UID: "262d9611-9da4-4ea4-82ab-abbcaab91a0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.111232 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "262d9611-9da4-4ea4-82ab-abbcaab91a0d" (UID: "262d9611-9da4-4ea4-82ab-abbcaab91a0d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.150352 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.162857 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftkzc\" (UniqueName: \"kubernetes.io/projected/262d9611-9da4-4ea4-82ab-abbcaab91a0d-kube-api-access-ftkzc\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.162901 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.162913 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.162925 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.162937 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/262d9611-9da4-4ea4-82ab-abbcaab91a0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.451293 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" event={"ID":"262d9611-9da4-4ea4-82ab-abbcaab91a0d","Type":"ContainerDied","Data":"a7374fa514af8acbce0deb6bb3ddfed78b8ec4920af5ab39b36f77c792eda234"} Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.451586 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7374fa514af8acbce0deb6bb3ddfed78b8ec4920af5ab39b36f77c792eda234" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.451308 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.454546 4771 generic.go:334] "Generic (PLEG): container finished" podID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerID="0674ba596b31ceb036f0c421e80442ea8adf13334363487e2fdbb62415f3ed15" exitCode=0 Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.454588 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdqfk" event={"ID":"c8e6ec83-22dd-47ba-8258-5c99128b041d","Type":"ContainerDied","Data":"0674ba596b31ceb036f0c421e80442ea8adf13334363487e2fdbb62415f3ed15"} Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.454613 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdqfk" event={"ID":"c8e6ec83-22dd-47ba-8258-5c99128b041d","Type":"ContainerStarted","Data":"b9e76506dbad3192fb7a3f2c73404599b3b03b55dc7af35525a5bebc2d51582b"} Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.542751 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th"] Jan 29 09:41:50 crc kubenswrapper[4771]: E0129 09:41:50.543223 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262d9611-9da4-4ea4-82ab-abbcaab91a0d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.543247 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="262d9611-9da4-4ea4-82ab-abbcaab91a0d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.543427 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="262d9611-9da4-4ea4-82ab-abbcaab91a0d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.544094 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.547148 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.547200 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.547408 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.547567 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.548450 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.567834 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th"] Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.672455 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.672525 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.672728 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.672773 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9x6v\" (UniqueName: \"kubernetes.io/projected/8cb7a0bd-4a49-4b9c-ae51-86219526db00-kube-api-access-f9x6v\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.672819 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.774491 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.774579 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.774602 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.774721 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.774744 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9x6v\" (UniqueName: \"kubernetes.io/projected/8cb7a0bd-4a49-4b9c-ae51-86219526db00-kube-api-access-f9x6v\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.780653 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.781019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.781114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.783146 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.791357 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9x6v\" (UniqueName: \"kubernetes.io/projected/8cb7a0bd-4a49-4b9c-ae51-86219526db00-kube-api-access-f9x6v\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tw2th\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:50 crc kubenswrapper[4771]: I0129 09:41:50.868674 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:41:51 crc kubenswrapper[4771]: I0129 09:41:51.378586 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th"] Jan 29 09:41:51 crc kubenswrapper[4771]: W0129 09:41:51.383543 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cb7a0bd_4a49_4b9c_ae51_86219526db00.slice/crio-a5fc3f7b88ace498fa458486309c55b691cffb2716af842d6512e7b3e6c722bc WatchSource:0}: Error finding container a5fc3f7b88ace498fa458486309c55b691cffb2716af842d6512e7b3e6c722bc: Status 404 returned error can't find the container with id a5fc3f7b88ace498fa458486309c55b691cffb2716af842d6512e7b3e6c722bc Jan 29 09:41:51 crc kubenswrapper[4771]: I0129 09:41:51.466552 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" event={"ID":"8cb7a0bd-4a49-4b9c-ae51-86219526db00","Type":"ContainerStarted","Data":"a5fc3f7b88ace498fa458486309c55b691cffb2716af842d6512e7b3e6c722bc"} Jan 29 09:41:52 crc kubenswrapper[4771]: I0129 09:41:52.479218 4771 generic.go:334] "Generic (PLEG): container finished" podID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerID="79e614bbfcff19f7dc2ad72b07b38ec5b68738ba3e6fd2b360ce446f7bf22320" exitCode=0 Jan 29 09:41:52 crc kubenswrapper[4771]: I0129 09:41:52.479274 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdqfk" event={"ID":"c8e6ec83-22dd-47ba-8258-5c99128b041d","Type":"ContainerDied","Data":"79e614bbfcff19f7dc2ad72b07b38ec5b68738ba3e6fd2b360ce446f7bf22320"} Jan 29 09:41:52 crc kubenswrapper[4771]: I0129 09:41:52.481893 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" event={"ID":"8cb7a0bd-4a49-4b9c-ae51-86219526db00","Type":"ContainerStarted","Data":"aeb71d2639cced5add2931152208181497e7c63c3e8df9f6c2ec252b3160d9fd"} Jan 29 09:41:53 crc kubenswrapper[4771]: I0129 09:41:53.492968 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdqfk" event={"ID":"c8e6ec83-22dd-47ba-8258-5c99128b041d","Type":"ContainerStarted","Data":"a11da0e1044f1470f3511ace1f11c521a5be6661e4627c3b131cd8df9437d6a4"} Jan 29 09:41:53 crc kubenswrapper[4771]: I0129 09:41:53.511442 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gdqfk" podStartSLOduration=3.086234648 podStartE2EDuration="5.51141977s" podCreationTimestamp="2026-01-29 09:41:48 +0000 UTC" firstStartedPulling="2026-01-29 
09:41:50.456784623 +0000 UTC m=+2130.579624850" lastFinishedPulling="2026-01-29 09:41:52.881969735 +0000 UTC m=+2133.004809972" observedRunningTime="2026-01-29 09:41:53.507676169 +0000 UTC m=+2133.630516406" watchObservedRunningTime="2026-01-29 09:41:53.51141977 +0000 UTC m=+2133.634260007" Jan 29 09:41:53 crc kubenswrapper[4771]: I0129 09:41:53.516479 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" podStartSLOduration=3.06123589 podStartE2EDuration="3.516465007s" podCreationTimestamp="2026-01-29 09:41:50 +0000 UTC" firstStartedPulling="2026-01-29 09:41:51.517205519 +0000 UTC m=+2131.640045746" lastFinishedPulling="2026-01-29 09:41:51.972434636 +0000 UTC m=+2132.095274863" observedRunningTime="2026-01-29 09:41:52.517021213 +0000 UTC m=+2132.639861440" watchObservedRunningTime="2026-01-29 09:41:53.516465007 +0000 UTC m=+2133.639305234" Jan 29 09:41:59 crc kubenswrapper[4771]: I0129 09:41:59.225940 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:59 crc kubenswrapper[4771]: I0129 09:41:59.227051 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:59 crc kubenswrapper[4771]: I0129 09:41:59.275643 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:59 crc kubenswrapper[4771]: I0129 09:41:59.608141 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:41:59 crc kubenswrapper[4771]: I0129 09:41:59.669081 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gdqfk"] Jan 29 09:42:01 crc kubenswrapper[4771]: I0129 09:42:01.567562 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gdqfk" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="registry-server" containerID="cri-o://a11da0e1044f1470f3511ace1f11c521a5be6661e4627c3b131cd8df9437d6a4" gracePeriod=2 Jan 29 09:42:02 crc kubenswrapper[4771]: I0129 09:42:02.578343 4771 generic.go:334] "Generic (PLEG): container finished" podID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerID="a11da0e1044f1470f3511ace1f11c521a5be6661e4627c3b131cd8df9437d6a4" exitCode=0 Jan 29 09:42:02 crc kubenswrapper[4771]: I0129 09:42:02.578425 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdqfk" event={"ID":"c8e6ec83-22dd-47ba-8258-5c99128b041d","Type":"ContainerDied","Data":"a11da0e1044f1470f3511ace1f11c521a5be6661e4627c3b131cd8df9437d6a4"} Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.003240 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.151030 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l86zc\" (UniqueName: \"kubernetes.io/projected/c8e6ec83-22dd-47ba-8258-5c99128b041d-kube-api-access-l86zc\") pod \"c8e6ec83-22dd-47ba-8258-5c99128b041d\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.151135 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-utilities\") pod \"c8e6ec83-22dd-47ba-8258-5c99128b041d\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.151251 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-catalog-content\") pod \"c8e6ec83-22dd-47ba-8258-5c99128b041d\" (UID: \"c8e6ec83-22dd-47ba-8258-5c99128b041d\") " Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.152739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-utilities" (OuterVolumeSpecName: "utilities") pod "c8e6ec83-22dd-47ba-8258-5c99128b041d" (UID: "c8e6ec83-22dd-47ba-8258-5c99128b041d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.160005 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e6ec83-22dd-47ba-8258-5c99128b041d-kube-api-access-l86zc" (OuterVolumeSpecName: "kube-api-access-l86zc") pod "c8e6ec83-22dd-47ba-8258-5c99128b041d" (UID: "c8e6ec83-22dd-47ba-8258-5c99128b041d"). InnerVolumeSpecName "kube-api-access-l86zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.253587 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l86zc\" (UniqueName: \"kubernetes.io/projected/c8e6ec83-22dd-47ba-8258-5c99128b041d-kube-api-access-l86zc\") on node \"crc\" DevicePath \"\"" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.253617 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.297143 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e6ec83-22dd-47ba-8258-5c99128b041d" (UID: "c8e6ec83-22dd-47ba-8258-5c99128b041d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.355756 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6ec83-22dd-47ba-8258-5c99128b041d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.593425 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdqfk" event={"ID":"c8e6ec83-22dd-47ba-8258-5c99128b041d","Type":"ContainerDied","Data":"b9e76506dbad3192fb7a3f2c73404599b3b03b55dc7af35525a5bebc2d51582b"} Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.594876 4771 scope.go:117] "RemoveContainer" containerID="a11da0e1044f1470f3511ace1f11c521a5be6661e4627c3b131cd8df9437d6a4" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.593644 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gdqfk" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.616593 4771 scope.go:117] "RemoveContainer" containerID="79e614bbfcff19f7dc2ad72b07b38ec5b68738ba3e6fd2b360ce446f7bf22320" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.651285 4771 scope.go:117] "RemoveContainer" containerID="0674ba596b31ceb036f0c421e80442ea8adf13334363487e2fdbb62415f3ed15" Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.663139 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gdqfk"] Jan 29 09:42:03 crc kubenswrapper[4771]: I0129 09:42:03.693194 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gdqfk"] Jan 29 09:42:04 crc kubenswrapper[4771]: I0129 09:42:04.852106 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" path="/var/lib/kubelet/pods/c8e6ec83-22dd-47ba-8258-5c99128b041d/volumes" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.282186 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dpzmf"] Jan 29 09:42:38 crc kubenswrapper[4771]: E0129 09:42:38.283623 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="extract-content" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.283653 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="extract-content" Jan 29 09:42:38 crc kubenswrapper[4771]: E0129 09:42:38.283772 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="extract-utilities" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.283794 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="extract-utilities" Jan 29 09:42:38 crc kubenswrapper[4771]: E0129 09:42:38.283827 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="registry-server" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.283845 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="registry-server" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.284380 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e6ec83-22dd-47ba-8258-5c99128b041d" containerName="registry-server" Jan 29 09:42:38 crc 
kubenswrapper[4771]: I0129 09:42:38.287828 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.293331 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpzmf"] Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.482990 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d3e3-66ee-4633-adb9-195a80952a82-utilities\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.483499 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6k27\" (UniqueName: \"kubernetes.io/projected/aae2d3e3-66ee-4633-adb9-195a80952a82-kube-api-access-v6k27\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.483565 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d3e3-66ee-4633-adb9-195a80952a82-catalog-content\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.585584 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6k27\" (UniqueName: \"kubernetes.io/projected/aae2d3e3-66ee-4633-adb9-195a80952a82-kube-api-access-v6k27\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.585655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d3e3-66ee-4633-adb9-195a80952a82-catalog-content\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.585685 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d3e3-66ee-4633-adb9-195a80952a82-utilities\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.586437 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d3e3-66ee-4633-adb9-195a80952a82-utilities\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.586527 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d3e3-66ee-4633-adb9-195a80952a82-catalog-content\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf" Jan 
Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.604888 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6k27\" (UniqueName: \"kubernetes.io/projected/aae2d3e3-66ee-4633-adb9-195a80952a82-kube-api-access-v6k27\") pod \"certified-operators-dpzmf\" (UID: \"aae2d3e3-66ee-4633-adb9-195a80952a82\") " pod="openshift-marketplace/certified-operators-dpzmf"
Jan 29 09:42:38 crc kubenswrapper[4771]: I0129 09:42:38.618997 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpzmf"
Jan 29 09:42:39 crc kubenswrapper[4771]: I0129 09:42:39.127556 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpzmf"]
Jan 29 09:42:39 crc kubenswrapper[4771]: I0129 09:42:39.964789 4771 generic.go:334] "Generic (PLEG): container finished" podID="aae2d3e3-66ee-4633-adb9-195a80952a82" containerID="025694cb2167a5e39c41326d72d5c48ab40d4ad8705fb892a32baa0ad4541f80" exitCode=0
Jan 29 09:42:39 crc kubenswrapper[4771]: I0129 09:42:39.964890 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpzmf" event={"ID":"aae2d3e3-66ee-4633-adb9-195a80952a82","Type":"ContainerDied","Data":"025694cb2167a5e39c41326d72d5c48ab40d4ad8705fb892a32baa0ad4541f80"}
Jan 29 09:42:39 crc kubenswrapper[4771]: I0129 09:42:39.965103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpzmf" event={"ID":"aae2d3e3-66ee-4633-adb9-195a80952a82","Type":"ContainerStarted","Data":"436e65946a9ce8ecb02c01867721f980a7d3840a9592441c71c028075e2e5564"}
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.674080 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xcjt7"]
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.676135 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.689823 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xcjt7"]
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.729054 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-utilities\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.729141 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-catalog-content\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.729190 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hkl\" (UniqueName: \"kubernetes.io/projected/ca6741a6-70b8-4723-86d7-09d6ecd08476-kube-api-access-k5hkl\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.830432 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-utilities\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.830514 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-catalog-content\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.830561 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hkl\" (UniqueName: \"kubernetes.io/projected/ca6741a6-70b8-4723-86d7-09d6ecd08476-kube-api-access-k5hkl\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.830946 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-utilities\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.831174 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-catalog-content\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:40 crc kubenswrapper[4771]: I0129 09:42:40.875472 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hkl\" (UniqueName: \"kubernetes.io/projected/ca6741a6-70b8-4723-86d7-09d6ecd08476-kube-api-access-k5hkl\") pod \"community-operators-xcjt7\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") " pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:41 crc kubenswrapper[4771]: I0129 09:42:41.003246 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:41 crc kubenswrapper[4771]: I0129 09:42:41.534727 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xcjt7"]
Jan 29 09:42:41 crc kubenswrapper[4771]: W0129 09:42:41.539637 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca6741a6_70b8_4723_86d7_09d6ecd08476.slice/crio-d0ca0be563a6881b027bf83cf71e6ac6e508ec4b9d7e98a4a0682b5d1bedcdd4 WatchSource:0}: Error finding container d0ca0be563a6881b027bf83cf71e6ac6e508ec4b9d7e98a4a0682b5d1bedcdd4: Status 404 returned error can't find the container with id d0ca0be563a6881b027bf83cf71e6ac6e508ec4b9d7e98a4a0682b5d1bedcdd4
Jan 29 09:42:41 crc kubenswrapper[4771]: I0129 09:42:41.988927 4771 generic.go:334] "Generic (PLEG): container finished" podID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerID="c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df" exitCode=0
Jan 29 09:42:41 crc kubenswrapper[4771]: I0129 09:42:41.988982 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcjt7" event={"ID":"ca6741a6-70b8-4723-86d7-09d6ecd08476","Type":"ContainerDied","Data":"c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df"}
Jan 29 09:42:41 crc kubenswrapper[4771]: I0129 09:42:41.989235 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcjt7" event={"ID":"ca6741a6-70b8-4723-86d7-09d6ecd08476","Type":"ContainerStarted","Data":"d0ca0be563a6881b027bf83cf71e6ac6e508ec4b9d7e98a4a0682b5d1bedcdd4"}
Jan 29 09:42:45 crc kubenswrapper[4771]: I0129 09:42:45.022672 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcjt7" event={"ID":"ca6741a6-70b8-4723-86d7-09d6ecd08476","Type":"ContainerStarted","Data":"353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38"}
Jan 29 09:42:45 crc kubenswrapper[4771]: I0129 09:42:45.034357 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpzmf" event={"ID":"aae2d3e3-66ee-4633-adb9-195a80952a82","Type":"ContainerStarted","Data":"53caa1504a0b950dd0ec0b15d04d76f76e5b0a5ccffd6dc17ddddfc4eff8674e"}
Jan 29 09:42:46 crc kubenswrapper[4771]: I0129 09:42:46.048409 4771 generic.go:334] "Generic (PLEG): container finished" podID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerID="353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38" exitCode=0
Jan 29 09:42:46 crc kubenswrapper[4771]: I0129 09:42:46.048511 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcjt7" event={"ID":"ca6741a6-70b8-4723-86d7-09d6ecd08476","Type":"ContainerDied","Data":"353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38"}
Jan 29 09:42:46 crc kubenswrapper[4771]: I0129 09:42:46.053878 4771 generic.go:334] "Generic (PLEG): container finished" podID="aae2d3e3-66ee-4633-adb9-195a80952a82" containerID="53caa1504a0b950dd0ec0b15d04d76f76e5b0a5ccffd6dc17ddddfc4eff8674e" exitCode=0
Jan 29 09:42:46 crc kubenswrapper[4771]: I0129 09:42:46.053914 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpzmf" event={"ID":"aae2d3e3-66ee-4633-adb9-195a80952a82","Type":"ContainerDied","Data":"53caa1504a0b950dd0ec0b15d04d76f76e5b0a5ccffd6dc17ddddfc4eff8674e"}
Jan 29 09:42:48 crc kubenswrapper[4771]: I0129 09:42:48.074877 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcjt7" event={"ID":"ca6741a6-70b8-4723-86d7-09d6ecd08476","Type":"ContainerStarted","Data":"95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805"}
Jan 29 09:42:49 crc kubenswrapper[4771]: I0129 09:42:49.087605 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpzmf" event={"ID":"aae2d3e3-66ee-4633-adb9-195a80952a82","Type":"ContainerStarted","Data":"de01392c7ce3949a5f79686f84d057da0d98f759df67b5c46d7639dff5bf2f49"}
Jan 29 09:42:49 crc kubenswrapper[4771]: I0129 09:42:49.105669 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dpzmf" podStartSLOduration=2.9365248900000003 podStartE2EDuration="11.105651465s" podCreationTimestamp="2026-01-29 09:42:38 +0000 UTC" firstStartedPulling="2026-01-29 09:42:39.966344148 +0000 UTC m=+2180.089184375" lastFinishedPulling="2026-01-29 09:42:48.135470723 +0000 UTC m=+2188.258310950" observedRunningTime="2026-01-29 09:42:49.100881186 +0000 UTC m=+2189.223721413" watchObservedRunningTime="2026-01-29 09:42:49.105651465 +0000 UTC m=+2189.228491702"
Jan 29 09:42:49 crc kubenswrapper[4771]: I0129 09:42:49.126895 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xcjt7" podStartSLOduration=3.540916706 podStartE2EDuration="9.12687413s" podCreationTimestamp="2026-01-29 09:42:40 +0000 UTC" firstStartedPulling="2026-01-29 09:42:41.990890312 +0000 UTC m=+2182.113730539" lastFinishedPulling="2026-01-29 09:42:47.576847736 +0000 UTC m=+2187.699687963" observedRunningTime="2026-01-29 09:42:49.11911195 +0000 UTC m=+2189.241952197" watchObservedRunningTime="2026-01-29 09:42:49.12687413 +0000 UTC m=+2189.249714357"
Jan 29 09:42:51 crc kubenswrapper[4771]: I0129 09:42:51.003368 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:51 crc kubenswrapper[4771]: I0129 09:42:51.003798 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:42:52 crc kubenswrapper[4771]: I0129 09:42:52.054043 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xcjt7" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="registry-server" probeResult="failure" output=<
Jan 29 09:42:52 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Jan 29 09:42:52 crc kubenswrapper[4771]: >
Jan 29 09:42:58 crc kubenswrapper[4771]: I0129 09:42:58.619475 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dpzmf"
Jan 29 09:42:58 crc kubenswrapper[4771]: I0129 09:42:58.620168 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dpzmf"
Jan 29 09:42:58 crc kubenswrapper[4771]: I0129 09:42:58.691635 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dpzmf"
Jan 29 09:42:59 crc kubenswrapper[4771]: I0129 09:42:59.236872 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dpzmf"
Jan 29 09:42:59 crc kubenswrapper[4771]: I0129 09:42:59.337513 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpzmf"]
Jan 29 09:42:59 crc kubenswrapper[4771]: I0129 09:42:59.380813 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qdmm"]
Jan 29 09:42:59 crc kubenswrapper[4771]: I0129 09:42:59.381067 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4qdmm" podUID="2093c052-f157-4807-9420-92386e715703" containerName="registry-server" containerID="cri-o://caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231" gracePeriod=2
Jan 29 09:42:59 crc kubenswrapper[4771]: E0129 09:42:59.695314 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231 is running failed: container process not found" containerID="caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 09:42:59 crc kubenswrapper[4771]: E0129 09:42:59.696005 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231 is running failed: container process not found" containerID="caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 09:42:59 crc kubenswrapper[4771]: E0129 09:42:59.696467 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231 is running failed: container process not found" containerID="caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 09:42:59 crc kubenswrapper[4771]: E0129 09:42:59.696511 4771 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-4qdmm" podUID="2093c052-f157-4807-9420-92386e715703" containerName="registry-server"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.074669 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qdmm"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.202084 4771 generic.go:334] "Generic (PLEG): container finished" podID="2093c052-f157-4807-9420-92386e715703" containerID="caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231" exitCode=0
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.202150 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qdmm"
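The ":50051" timeout earlier is the registry-server startup probe catching the catalog still warming up, and the ExecSync errors just above are the same grpc_health_probe check racing a container that is already gone. For triage it helps to tally probe events per pod; a throwaway sketch, assuming journalctl-style kubelet lines on stdin:

import re
import sys

# Tallies kubelet "Probe failed" / "Probe errored" entries per pod and probe
# type; the field layout is assumed from the lines in this log.
PROBE = re.compile(
    r'"Probe (?:failed|errored)".*?probeType="(?P<type>\w+)".*?pod="(?P<pod>[^"]+)"'
)

counts = {}
for line in sys.stdin:
    m = PROBE.search(line)
    if m:
        key = (m.group("pod"), m.group("type"))
        counts[key] = counts.get(key, 0) + 1

for (pod, ptype), n in sorted(counts.items()):
    print(f"{n:3d}  {ptype:<9}  {pod}")

Fed the output of journalctl -u kubelet for this window, it would surface the Startup failure for community-operators-xcjt7 and the Readiness error for certified-operators-4qdmm seen above.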
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.202148 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qdmm" event={"ID":"2093c052-f157-4807-9420-92386e715703","Type":"ContainerDied","Data":"caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231"}
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.202466 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qdmm" event={"ID":"2093c052-f157-4807-9420-92386e715703","Type":"ContainerDied","Data":"5fe52f020935d5b0bd10499102b43b6d48a334c93960a8ee59b5b9aad6a51961"}
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.202485 4771 scope.go:117] "RemoveContainer" containerID="caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.202922 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-catalog-content\") pod \"2093c052-f157-4807-9420-92386e715703\" (UID: \"2093c052-f157-4807-9420-92386e715703\") "
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.202970 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8prmz\" (UniqueName: \"kubernetes.io/projected/2093c052-f157-4807-9420-92386e715703-kube-api-access-8prmz\") pod \"2093c052-f157-4807-9420-92386e715703\" (UID: \"2093c052-f157-4807-9420-92386e715703\") "
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.203038 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-utilities\") pod \"2093c052-f157-4807-9420-92386e715703\" (UID: \"2093c052-f157-4807-9420-92386e715703\") "
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.204899 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-utilities" (OuterVolumeSpecName: "utilities") pod "2093c052-f157-4807-9420-92386e715703" (UID: "2093c052-f157-4807-9420-92386e715703"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.208035 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2093c052-f157-4807-9420-92386e715703-kube-api-access-8prmz" (OuterVolumeSpecName: "kube-api-access-8prmz") pod "2093c052-f157-4807-9420-92386e715703" (UID: "2093c052-f157-4807-9420-92386e715703"). InnerVolumeSpecName "kube-api-access-8prmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.263466 4771 scope.go:117] "RemoveContainer" containerID="a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.284559 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2093c052-f157-4807-9420-92386e715703" (UID: "2093c052-f157-4807-9420-92386e715703"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.301319 4771 scope.go:117] "RemoveContainer" containerID="d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.305931 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.305961 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8prmz\" (UniqueName: \"kubernetes.io/projected/2093c052-f157-4807-9420-92386e715703-kube-api-access-8prmz\") on node \"crc\" DevicePath \"\""
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.305972 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2093c052-f157-4807-9420-92386e715703-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.348128 4771 scope.go:117] "RemoveContainer" containerID="caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231"
Jan 29 09:43:00 crc kubenswrapper[4771]: E0129 09:43:00.348579 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231\": container with ID starting with caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231 not found: ID does not exist" containerID="caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.348625 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231"} err="failed to get container status \"caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231\": rpc error: code = NotFound desc = could not find container \"caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231\": container with ID starting with caae28d2e98829b4103cadb787bbbfdc521ce42267951965d411e50c1bd87231 not found: ID does not exist"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.348651 4771 scope.go:117] "RemoveContainer" containerID="a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a"
Jan 29 09:43:00 crc kubenswrapper[4771]: E0129 09:43:00.349171 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a\": container with ID starting with a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a not found: ID does not exist" containerID="a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.349195 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a"} err="failed to get container status \"a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a\": rpc error: code = NotFound desc = could not find container \"a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a\": container with ID starting with a6ded735a1eb969bc9921998d20ef685d6870ea18d6aa5c3b3437dc054e11f5a not found: ID does not exist"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.349208 4771 scope.go:117] "RemoveContainer" containerID="d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c"
Jan 29 09:43:00 crc kubenswrapper[4771]: E0129 09:43:00.349411 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c\": container with ID starting with d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c not found: ID does not exist" containerID="d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.349435 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c"} err="failed to get container status \"d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c\": rpc error: code = NotFound desc = could not find container \"d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c\": container with ID starting with d734d1d01d9440e17b0492ffd27ed63aeae41608d0f8bbfee1fe0227763ad81c not found: ID does not exist"
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.567399 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qdmm"]
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.574497 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4qdmm"]
Jan 29 09:43:00 crc kubenswrapper[4771]: I0129 09:43:00.852313 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2093c052-f157-4807-9420-92386e715703" path="/var/lib/kubelet/pods/2093c052-f157-4807-9420-92386e715703/volumes"
Jan 29 09:43:01 crc kubenswrapper[4771]: I0129 09:43:01.058974 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:43:01 crc kubenswrapper[4771]: I0129 09:43:01.127361 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.334398 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xcjt7"]
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.334647 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xcjt7" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="registry-server" containerID="cri-o://95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805" gracePeriod=2
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.825555 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcjt7"
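Everything above is the normal teardown choreography for a replaced catalog pod: SyncLoop DELETE, a kill with gracePeriod=2, PLEG ContainerDied events, volume unmounts and detaches, then SyncLoop REMOVE and orphan cleanup. The PLEG lines carry a small JSON blob per event, so a per-pod timeline can be reconstructed mechanically; a sketch under the same stdin assumption as before:

import json
import re
import sys

# Rebuilds a per-pod container event timeline from "SyncLoop (PLEG)" entries;
# the journald prefix and event={...} layout are assumed from this log.
EVENT = re.compile(
    r'^(?P<ts>\w+ \d+ [\d:]+) .*"SyncLoop \(PLEG\): event for pod" '
    r'pod="(?P<pod>[^"]+)" event=(?P<blob>\{.*\})'
)

for line in sys.stdin:
    m = EVENT.search(line)
    if not m:
        continue
    ev = json.loads(m.group("blob"))  # {"ID": ..., "Type": ..., "Data": ...}
    print(f"{m.group('ts')}  {m.group('pod'):<55} {ev['Type']:<16} {ev['Data'][:13]}")

For certified-operators-4qdmm this prints the two ContainerDied events above, one for the registry-server container and one for the pod sandbox.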
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.977285 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-utilities\") pod \"ca6741a6-70b8-4723-86d7-09d6ecd08476\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") "
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.978089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-catalog-content\") pod \"ca6741a6-70b8-4723-86d7-09d6ecd08476\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") "
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.978249 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hkl\" (UniqueName: \"kubernetes.io/projected/ca6741a6-70b8-4723-86d7-09d6ecd08476-kube-api-access-k5hkl\") pod \"ca6741a6-70b8-4723-86d7-09d6ecd08476\" (UID: \"ca6741a6-70b8-4723-86d7-09d6ecd08476\") "
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.978467 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-utilities" (OuterVolumeSpecName: "utilities") pod "ca6741a6-70b8-4723-86d7-09d6ecd08476" (UID: "ca6741a6-70b8-4723-86d7-09d6ecd08476"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.979010 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:43:03 crc kubenswrapper[4771]: I0129 09:43:03.983302 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca6741a6-70b8-4723-86d7-09d6ecd08476-kube-api-access-k5hkl" (OuterVolumeSpecName: "kube-api-access-k5hkl") pod "ca6741a6-70b8-4723-86d7-09d6ecd08476" (UID: "ca6741a6-70b8-4723-86d7-09d6ecd08476"). InnerVolumeSpecName "kube-api-access-k5hkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.025301 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca6741a6-70b8-4723-86d7-09d6ecd08476" (UID: "ca6741a6-70b8-4723-86d7-09d6ecd08476"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.080258 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6741a6-70b8-4723-86d7-09d6ecd08476-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.080295 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hkl\" (UniqueName: \"kubernetes.io/projected/ca6741a6-70b8-4723-86d7-09d6ecd08476-kube-api-access-k5hkl\") on node \"crc\" DevicePath \"\""
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.240242 4771 generic.go:334] "Generic (PLEG): container finished" podID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerID="95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805" exitCode=0
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.240294 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcjt7" event={"ID":"ca6741a6-70b8-4723-86d7-09d6ecd08476","Type":"ContainerDied","Data":"95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805"}
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.240349 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcjt7" event={"ID":"ca6741a6-70b8-4723-86d7-09d6ecd08476","Type":"ContainerDied","Data":"d0ca0be563a6881b027bf83cf71e6ac6e508ec4b9d7e98a4a0682b5d1bedcdd4"}
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.240346 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcjt7"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.240371 4771 scope.go:117] "RemoveContainer" containerID="95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.281812 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xcjt7"]
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.282181 4771 scope.go:117] "RemoveContainer" containerID="353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.289990 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xcjt7"]
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.302586 4771 scope.go:117] "RemoveContainer" containerID="c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.347135 4771 scope.go:117] "RemoveContainer" containerID="95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805"
Jan 29 09:43:04 crc kubenswrapper[4771]: E0129 09:43:04.347634 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805\": container with ID starting with 95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805 not found: ID does not exist" containerID="95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.347675 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805"} err="failed to get container status \"95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805\": rpc error: code = NotFound desc = could not find container \"95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805\": container with ID starting with 95f5787e203fb0994824b62a09e960b0c93bd0f69c826a30d6bb525a5c50f805 not found: ID does not exist"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.347724 4771 scope.go:117] "RemoveContainer" containerID="353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38"
Jan 29 09:43:04 crc kubenswrapper[4771]: E0129 09:43:04.348120 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38\": container with ID starting with 353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38 not found: ID does not exist" containerID="353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.348145 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38"} err="failed to get container status \"353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38\": rpc error: code = NotFound desc = could not find container \"353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38\": container with ID starting with 353220f3025b6753aa0e313a166a095f6bb1d76544a709212d37778bdb026e38 not found: ID does not exist"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.348164 4771 scope.go:117] "RemoveContainer" containerID="c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df"
Jan 29 09:43:04 crc kubenswrapper[4771]: E0129 09:43:04.348430 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df\": container with ID starting with c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df not found: ID does not exist" containerID="c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.348461 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df"} err="failed to get container status \"c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df\": rpc error: code = NotFound desc = could not find container \"c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df\": container with ID starting with c651ac4603e5d6983ec44e0d392db4f629ae3c2d9e8100944e053c832e5872df not found: ID does not exist"
Jan 29 09:43:04 crc kubenswrapper[4771]: I0129 09:43:04.851290 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" path="/var/lib/kubelet/pods/ca6741a6-70b8-4723-86d7-09d6ecd08476/volumes"
Jan 29 09:43:44 crc kubenswrapper[4771]: I0129 09:43:44.272337 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:43:44 crc kubenswrapper[4771]: I0129 09:43:44.273114 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.699450 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l857m"] Jan 29 09:43:59 crc kubenswrapper[4771]: E0129 09:43:59.700504 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2093c052-f157-4807-9420-92386e715703" containerName="extract-content" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700523 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2093c052-f157-4807-9420-92386e715703" containerName="extract-content" Jan 29 09:43:59 crc kubenswrapper[4771]: E0129 09:43:59.700541 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="registry-server" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700548 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="registry-server" Jan 29 09:43:59 crc kubenswrapper[4771]: E0129 09:43:59.700562 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="extract-utilities" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700570 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="extract-utilities" Jan 29 09:43:59 crc kubenswrapper[4771]: E0129 09:43:59.700587 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2093c052-f157-4807-9420-92386e715703" containerName="extract-utilities" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700593 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2093c052-f157-4807-9420-92386e715703" containerName="extract-utilities" Jan 29 09:43:59 crc kubenswrapper[4771]: E0129 09:43:59.700625 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2093c052-f157-4807-9420-92386e715703" containerName="registry-server" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700633 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2093c052-f157-4807-9420-92386e715703" containerName="registry-server" Jan 29 09:43:59 crc kubenswrapper[4771]: E0129 09:43:59.700647 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="extract-content" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700654 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="extract-content" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700911 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca6741a6-70b8-4723-86d7-09d6ecd08476" containerName="registry-server" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.700933 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2093c052-f157-4807-9420-92386e715703" containerName="registry-server" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.702498 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.728555 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l857m"] Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.798163 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-catalog-content\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.798372 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzk87\" (UniqueName: \"kubernetes.io/projected/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-kube-api-access-dzk87\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.798430 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-utilities\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.900473 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-catalog-content\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.900641 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzk87\" (UniqueName: \"kubernetes.io/projected/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-kube-api-access-dzk87\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.900721 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-utilities\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.901083 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-utilities\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.901192 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-catalog-content\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:43:59 crc kubenswrapper[4771]: I0129 09:43:59.923668 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dzk87\" (UniqueName: \"kubernetes.io/projected/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-kube-api-access-dzk87\") pod \"redhat-marketplace-l857m\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:00 crc kubenswrapper[4771]: I0129 09:44:00.020321 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:00 crc kubenswrapper[4771]: I0129 09:44:00.485242 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l857m"] Jan 29 09:44:00 crc kubenswrapper[4771]: I0129 09:44:00.802671 4771 generic.go:334] "Generic (PLEG): container finished" podID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerID="9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495" exitCode=0 Jan 29 09:44:00 crc kubenswrapper[4771]: I0129 09:44:00.802782 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l857m" event={"ID":"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d","Type":"ContainerDied","Data":"9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495"} Jan 29 09:44:00 crc kubenswrapper[4771]: I0129 09:44:00.802827 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l857m" event={"ID":"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d","Type":"ContainerStarted","Data":"f5c6323c43b022ad70a74f95c14540dd6c1b124129e25c979a214d984f6c7e54"} Jan 29 09:44:00 crc kubenswrapper[4771]: I0129 09:44:00.805348 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:44:02 crc kubenswrapper[4771]: I0129 09:44:02.828313 4771 generic.go:334] "Generic (PLEG): container finished" podID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerID="d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc" exitCode=0 Jan 29 09:44:02 crc kubenswrapper[4771]: I0129 09:44:02.828461 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l857m" event={"ID":"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d","Type":"ContainerDied","Data":"d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc"} Jan 29 09:44:03 crc kubenswrapper[4771]: I0129 09:44:03.842437 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l857m" event={"ID":"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d","Type":"ContainerStarted","Data":"7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b"} Jan 29 09:44:03 crc kubenswrapper[4771]: I0129 09:44:03.877044 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l857m" podStartSLOduration=2.455718595 podStartE2EDuration="4.877023942s" podCreationTimestamp="2026-01-29 09:43:59 +0000 UTC" firstStartedPulling="2026-01-29 09:44:00.804904061 +0000 UTC m=+2260.927744328" lastFinishedPulling="2026-01-29 09:44:03.226209438 +0000 UTC m=+2263.349049675" observedRunningTime="2026-01-29 09:44:03.869604401 +0000 UTC m=+2263.992444638" watchObservedRunningTime="2026-01-29 09:44:03.877023942 +0000 UTC m=+2263.999864169" Jan 29 09:44:10 crc kubenswrapper[4771]: I0129 09:44:10.020746 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:10 crc kubenswrapper[4771]: I0129 09:44:10.021206 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:10 crc kubenswrapper[4771]: I0129 09:44:10.069349 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:10 crc kubenswrapper[4771]: I0129 09:44:10.973541 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:11 crc kubenswrapper[4771]: I0129 09:44:11.030680 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l857m"] Jan 29 09:44:12 crc kubenswrapper[4771]: I0129 09:44:12.926076 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l857m" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerName="registry-server" containerID="cri-o://7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b" gracePeriod=2 Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.459856 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.572489 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-utilities\") pod \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.572612 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-catalog-content\") pod \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.572662 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzk87\" (UniqueName: \"kubernetes.io/projected/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-kube-api-access-dzk87\") pod \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\" (UID: \"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d\") " Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.573288 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-utilities" (OuterVolumeSpecName: "utilities") pod "35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" (UID: "35bbf34e-e05a-4528-b8f0-4d0e2d48c52d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.578880 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-kube-api-access-dzk87" (OuterVolumeSpecName: "kube-api-access-dzk87") pod "35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" (UID: "35bbf34e-e05a-4528-b8f0-4d0e2d48c52d"). InnerVolumeSpecName "kube-api-access-dzk87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.631139 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" (UID: "35bbf34e-e05a-4528-b8f0-4d0e2d48c52d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.675084 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.675124 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.675136 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzk87\" (UniqueName: \"kubernetes.io/projected/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d-kube-api-access-dzk87\") on node \"crc\" DevicePath \"\"" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.938712 4771 generic.go:334] "Generic (PLEG): container finished" podID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerID="7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b" exitCode=0 Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.938760 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l857m" event={"ID":"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d","Type":"ContainerDied","Data":"7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b"} Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.938792 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l857m" event={"ID":"35bbf34e-e05a-4528-b8f0-4d0e2d48c52d","Type":"ContainerDied","Data":"f5c6323c43b022ad70a74f95c14540dd6c1b124129e25c979a214d984f6c7e54"} Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.938795 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l857m" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.938821 4771 scope.go:117] "RemoveContainer" containerID="7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.958188 4771 scope.go:117] "RemoveContainer" containerID="d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.987900 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l857m"] Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.998067 4771 scope.go:117] "RemoveContainer" containerID="9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495" Jan 29 09:44:13 crc kubenswrapper[4771]: I0129 09:44:13.999370 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l857m"] Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.028577 4771 scope.go:117] "RemoveContainer" containerID="7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b" Jan 29 09:44:14 crc kubenswrapper[4771]: E0129 09:44:14.029130 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b\": container with ID starting with 7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b not found: ID does not exist" containerID="7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b" Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.029233 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b"} err="failed to get container status \"7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b\": rpc error: code = NotFound desc = could not find container \"7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b\": container with ID starting with 7d99ec1261388b3ffa025cb143126bef71138e749273efb0c89eabba3fadc03b not found: ID does not exist" Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.029267 4771 scope.go:117] "RemoveContainer" containerID="d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc" Jan 29 09:44:14 crc kubenswrapper[4771]: E0129 09:44:14.029666 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc\": container with ID starting with d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc not found: ID does not exist" containerID="d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc" Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.029742 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc"} err="failed to get container status \"d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc\": rpc error: code = NotFound desc = could not find container \"d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc\": container with ID starting with d851003ea3dbce033c2706885c789901263bb56a79b3ae6169cdc15bd4b049fc not found: ID does not exist" Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.029759 4771 scope.go:117] "RemoveContainer" 
containerID="9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495" Jan 29 09:44:14 crc kubenswrapper[4771]: E0129 09:44:14.030069 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495\": container with ID starting with 9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495 not found: ID does not exist" containerID="9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495" Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.030096 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495"} err="failed to get container status \"9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495\": rpc error: code = NotFound desc = could not find container \"9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495\": container with ID starting with 9dcbd1908093157d980ff657bc5857fb0071b6fd6634b0a4b3f303462c9ba495 not found: ID does not exist" Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.272061 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.272172 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:44:14 crc kubenswrapper[4771]: I0129 09:44:14.850177 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" path="/var/lib/kubelet/pods/35bbf34e-e05a-4528-b8f0-4d0e2d48c52d/volumes" Jan 29 09:44:44 crc kubenswrapper[4771]: I0129 09:44:44.271249 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:44:44 crc kubenswrapper[4771]: I0129 09:44:44.272232 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:44:44 crc kubenswrapper[4771]: I0129 09:44:44.272313 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:44:44 crc kubenswrapper[4771]: I0129 09:44:44.273654 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:44:44 crc 
kubenswrapper[4771]: I0129 09:44:44.273767 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" gracePeriod=600 Jan 29 09:44:44 crc kubenswrapper[4771]: E0129 09:44:44.400136 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:44:45 crc kubenswrapper[4771]: I0129 09:44:45.270050 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" exitCode=0 Jan 29 09:44:45 crc kubenswrapper[4771]: I0129 09:44:45.270092 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d"} Jan 29 09:44:45 crc kubenswrapper[4771]: I0129 09:44:45.270125 4771 scope.go:117] "RemoveContainer" containerID="31676f06b71cda577f4f9037e4996c95a39d3e20387b8d818c901817022dfe5a" Jan 29 09:44:45 crc kubenswrapper[4771]: I0129 09:44:45.271224 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:44:45 crc kubenswrapper[4771]: E0129 09:44:45.271670 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:44:58 crc kubenswrapper[4771]: I0129 09:44:58.838551 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:44:58 crc kubenswrapper[4771]: E0129 09:44:58.839517 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.150161 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s"] Jan 29 09:45:00 crc kubenswrapper[4771]: E0129 09:45:00.152934 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerName="extract-utilities" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.153036 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" 
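machine-config-daemon has now failed its liveness probe repeatedly, been killed with gracePeriod=600, and entered CrashLoopBackOff; the 5m0s figure matches the kubelet's restart back-off cap (the delay doubles from 10s up to five minutes). A sketch that pulls the back-off window and container name out of such entries, with the regex fitted to the backslash-escaped quoting exactly as rendered in this journal:

import re
import sys

# Extracts the back-off window from CrashLoopBackOff "Error syncing pod"
# entries; the pattern is assumed from this log's rendering only.
BACKOFF = re.compile(
    r'CrashLoopBackOff: \\?"back-off (?P<delay>\S+) restarting failed '
    r'container=(?P<container>\S+) pod=(?P<pod>[^_]+)_'
)

for line in sys.stdin:
    m = BACKOFF.search(line)
    if m:
        print(f"{m.group('pod'):<40} container={m.group('container')} back-off={m.group('delay')}")

For this window it would print machine-config-daemon-79kz5 three times, all already at the 5m0s cap.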
containerName="extract-utilities" Jan 29 09:45:00 crc kubenswrapper[4771]: E0129 09:45:00.153109 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerName="extract-content" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.153195 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerName="extract-content" Jan 29 09:45:00 crc kubenswrapper[4771]: E0129 09:45:00.153289 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerName="registry-server" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.153357 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerName="registry-server" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.153670 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="35bbf34e-e05a-4528-b8f0-4d0e2d48c52d" containerName="registry-server" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.154340 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.160939 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s"] Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.168017 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.168492 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.206369 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-config-volume\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.206446 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49h7w\" (UniqueName: \"kubernetes.io/projected/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-kube-api-access-49h7w\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.206670 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-secret-volume\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.307828 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49h7w\" (UniqueName: \"kubernetes.io/projected/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-kube-api-access-49h7w\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.307922 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-secret-volume\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.308010 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-config-volume\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.308984 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-config-volume\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.317490 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-secret-volume\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.324328 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49h7w\" (UniqueName: \"kubernetes.io/projected/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-kube-api-access-49h7w\") pod \"collect-profiles-29494665-85m2s\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.483412 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:00 crc kubenswrapper[4771]: I0129 09:45:00.945810 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s"] Jan 29 09:45:01 crc kubenswrapper[4771]: I0129 09:45:01.420057 4771 generic.go:334] "Generic (PLEG): container finished" podID="4dee3bcc-6778-4d97-9d2d-3947d3326d3d" containerID="804bfc02044aba254b7b8dc15470c16d1dd8d95ff6fd4e704ee0eeed2e89b547" exitCode=0 Jan 29 09:45:01 crc kubenswrapper[4771]: I0129 09:45:01.420321 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" event={"ID":"4dee3bcc-6778-4d97-9d2d-3947d3326d3d","Type":"ContainerDied","Data":"804bfc02044aba254b7b8dc15470c16d1dd8d95ff6fd4e704ee0eeed2e89b547"} Jan 29 09:45:01 crc kubenswrapper[4771]: I0129 09:45:01.420345 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" event={"ID":"4dee3bcc-6778-4d97-9d2d-3947d3326d3d","Type":"ContainerStarted","Data":"cd0a704921183bc487dea3387000185b8594ce686efba2b648d918ce74685389"} Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.750608 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.856663 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49h7w\" (UniqueName: \"kubernetes.io/projected/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-kube-api-access-49h7w\") pod \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.857316 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-secret-volume\") pod \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.857386 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-config-volume\") pod \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\" (UID: \"4dee3bcc-6778-4d97-9d2d-3947d3326d3d\") " Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.857988 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-config-volume" (OuterVolumeSpecName: "config-volume") pod "4dee3bcc-6778-4d97-9d2d-3947d3326d3d" (UID: "4dee3bcc-6778-4d97-9d2d-3947d3326d3d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.863066 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-kube-api-access-49h7w" (OuterVolumeSpecName: "kube-api-access-49h7w") pod "4dee3bcc-6778-4d97-9d2d-3947d3326d3d" (UID: "4dee3bcc-6778-4d97-9d2d-3947d3326d3d"). InnerVolumeSpecName "kube-api-access-49h7w". 
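[Annotation] The collect-profiles Job around these entries exercises the kubelet's volume reconciler in both directions: at pod creation each volume passes through VerifyControllerAttachedVolume, "MountVolume started", then "MountVolume.SetUp succeeded"; once the only container exits (exitCode=0) and the pod leaves the desired state, the same volumes are driven back down through "UnmountVolume started", "TearDown succeeded", and finally "Volume detached". A toy model of that loop, with simplified types that are not the kubelet's real API:

```go
package main

import "fmt"

// state maps volume name -> mounted, standing in for the reconciler's
// desired-state and actual-state worlds (reconciler_common.go).
type state map[string]bool

func reconcile(desired, actual state) {
	// Mount anything desired but not yet mounted (the MountVolume
	// started / SetUp succeeded pairs when a pod is created).
	for v := range desired {
		if !actual[v] {
			fmt.Println("MountVolume.SetUp succeeded for", v)
			actual[v] = true
		}
	}
	// Unmount anything mounted but no longer desired (the UnmountVolume
	// / Volume detached pairs after the Job's container exits).
	for v := range actual {
		if !desired[v] {
			fmt.Println("UnmountVolume.TearDown succeeded for", v)
			delete(actual, v)
			fmt.Println("Volume detached for", v)
		}
	}
}

func main() {
	actual := state{}
	// Pod added: the three volumes seen above.
	reconcile(state{"config-volume": true, "secret-volume": true, "kube-api-access-49h7w": true}, actual)
	// Pod completed and deleted: desired state is now empty.
	reconcile(state{}, actual)
}
```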
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.865577 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4dee3bcc-6778-4d97-9d2d-3947d3326d3d" (UID: "4dee3bcc-6778-4d97-9d2d-3947d3326d3d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.959332 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.959371 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:45:02 crc kubenswrapper[4771]: I0129 09:45:02.959383 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49h7w\" (UniqueName: \"kubernetes.io/projected/4dee3bcc-6778-4d97-9d2d-3947d3326d3d-kube-api-access-49h7w\") on node \"crc\" DevicePath \"\"" Jan 29 09:45:03 crc kubenswrapper[4771]: I0129 09:45:03.451188 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" event={"ID":"4dee3bcc-6778-4d97-9d2d-3947d3326d3d","Type":"ContainerDied","Data":"cd0a704921183bc487dea3387000185b8594ce686efba2b648d918ce74685389"} Jan 29 09:45:03 crc kubenswrapper[4771]: I0129 09:45:03.451255 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd0a704921183bc487dea3387000185b8594ce686efba2b648d918ce74685389" Jan 29 09:45:03 crc kubenswrapper[4771]: I0129 09:45:03.451266 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494665-85m2s" Jan 29 09:45:03 crc kubenswrapper[4771]: I0129 09:45:03.821010 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz"] Jan 29 09:45:03 crc kubenswrapper[4771]: I0129 09:45:03.828399 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-rg7xz"] Jan 29 09:45:04 crc kubenswrapper[4771]: I0129 09:45:04.850453 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cda2f63-799c-4e05-894d-c0fe721cf974" path="/var/lib/kubelet/pods/8cda2f63-799c-4e05-894d-c0fe721cf974/volumes" Jan 29 09:45:10 crc kubenswrapper[4771]: I0129 09:45:10.847159 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:45:10 crc kubenswrapper[4771]: E0129 09:45:10.848123 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:45:21 crc kubenswrapper[4771]: I0129 09:45:21.839280 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:45:21 crc kubenswrapper[4771]: E0129 09:45:21.840664 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:45:34 crc kubenswrapper[4771]: I0129 09:45:34.838451 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:45:34 crc kubenswrapper[4771]: E0129 09:45:34.839647 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:45:45 crc kubenswrapper[4771]: I0129 09:45:45.580023 4771 scope.go:117] "RemoveContainer" containerID="244f7ca7877b0d49e9d04296616d20be9c2065883eff1967f69a2f5110d17b9c" Jan 29 09:45:48 crc kubenswrapper[4771]: I0129 09:45:48.838536 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:45:48 crc kubenswrapper[4771]: E0129 09:45:48.839344 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:45:58 crc kubenswrapper[4771]: I0129 09:45:58.015509 4771 generic.go:334] "Generic (PLEG): container finished" podID="8cb7a0bd-4a49-4b9c-ae51-86219526db00" containerID="aeb71d2639cced5add2931152208181497e7c63c3e8df9f6c2ec252b3160d9fd" exitCode=0 Jan 29 09:45:58 crc kubenswrapper[4771]: I0129 09:45:58.015648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" event={"ID":"8cb7a0bd-4a49-4b9c-ae51-86219526db00","Type":"ContainerDied","Data":"aeb71d2639cced5add2931152208181497e7c63c3e8df9f6c2ec252b3160d9fd"} Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.446058 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.552920 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-secret-0\") pod \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.553094 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-combined-ca-bundle\") pod \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.553165 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-inventory\") pod \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.553253 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-ssh-key-openstack-edpm-ipam\") pod \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.553279 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9x6v\" (UniqueName: \"kubernetes.io/projected/8cb7a0bd-4a49-4b9c-ae51-86219526db00-kube-api-access-f9x6v\") pod \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\" (UID: \"8cb7a0bd-4a49-4b9c-ae51-86219526db00\") " Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.559402 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8cb7a0bd-4a49-4b9c-ae51-86219526db00" (UID: "8cb7a0bd-4a49-4b9c-ae51-86219526db00"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.559543 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb7a0bd-4a49-4b9c-ae51-86219526db00-kube-api-access-f9x6v" (OuterVolumeSpecName: "kube-api-access-f9x6v") pod "8cb7a0bd-4a49-4b9c-ae51-86219526db00" (UID: "8cb7a0bd-4a49-4b9c-ae51-86219526db00"). InnerVolumeSpecName "kube-api-access-f9x6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.586242 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8cb7a0bd-4a49-4b9c-ae51-86219526db00" (UID: "8cb7a0bd-4a49-4b9c-ae51-86219526db00"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.586623 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-inventory" (OuterVolumeSpecName: "inventory") pod "8cb7a0bd-4a49-4b9c-ae51-86219526db00" (UID: "8cb7a0bd-4a49-4b9c-ae51-86219526db00"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.587828 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8cb7a0bd-4a49-4b9c-ae51-86219526db00" (UID: "8cb7a0bd-4a49-4b9c-ae51-86219526db00"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.656490 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.656530 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.656541 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.656551 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9x6v\" (UniqueName: \"kubernetes.io/projected/8cb7a0bd-4a49-4b9c-ae51-86219526db00-kube-api-access-f9x6v\") on node \"crc\" DevicePath \"\"" Jan 29 09:45:59 crc kubenswrapper[4771]: I0129 09:45:59.656566 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8cb7a0bd-4a49-4b9c-ae51-86219526db00-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.034262 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" event={"ID":"8cb7a0bd-4a49-4b9c-ae51-86219526db00","Type":"ContainerDied","Data":"a5fc3f7b88ace498fa458486309c55b691cffb2716af842d6512e7b3e6c722bc"} Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.034301 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5fc3f7b88ace498fa458486309c55b691cffb2716af842d6512e7b3e6c722bc" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.034303 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tw2th" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.151996 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx"] Jan 29 09:46:00 crc kubenswrapper[4771]: E0129 09:46:00.152375 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dee3bcc-6778-4d97-9d2d-3947d3326d3d" containerName="collect-profiles" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.152392 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dee3bcc-6778-4d97-9d2d-3947d3326d3d" containerName="collect-profiles" Jan 29 09:46:00 crc kubenswrapper[4771]: E0129 09:46:00.152401 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb7a0bd-4a49-4b9c-ae51-86219526db00" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.152410 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb7a0bd-4a49-4b9c-ae51-86219526db00" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.152608 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dee3bcc-6778-4d97-9d2d-3947d3326d3d" containerName="collect-profiles" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.152640 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb7a0bd-4a49-4b9c-ae51-86219526db00" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.153221 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.156188 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.156412 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.156570 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.156744 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.157063 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.157788 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.157795 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.167883 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx"] Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.266537 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tdf\" (UniqueName: \"kubernetes.io/projected/54a83150-21fe-4085-ad4c-5eb77724684a-kube-api-access-f5tdf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") 
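[Annotation] The RemoveStaleState / "Deleted CPUSet assignment" pairs above recur each time a new pod is admitted (collect-profiles at 09:45:00, nova-edpm here at 09:46:00): before computing assignments for the incoming pod, the CPU and memory managers drop per-container resource state left by containers that no longer exist. A compact illustration under our own simplified types (the real state lives behind state_mem.go; the names below are stand-ins):

```go
package main

import "fmt"

// removeStaleState drops resource assignments for containers that are
// no longer running, as the cpu_manager/memory_manager passes do before
// admitting a new pod. Simplified sketch, not kubelet code.
func removeStaleState(assignments map[string][]int, running map[string]bool) {
	for container := range assignments {
		if !running[container] {
			fmt.Printf("RemoveStaleState: removing container %q\n", container)
			delete(assignments, container)
			fmt.Printf("Deleted CPUSet assignment for %q\n", container)
		}
	}
}

func main() {
	assignments := map[string][]int{
		"collect-profiles": {0, 1},
		"libvirt-edpm-deployment-openstack-edpm-ipam": {2, 3},
	}
	removeStaleState(assignments, map[string]bool{}) // neither container still runs
}
```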
" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.266612 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.266631 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.266849 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.266970 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.267044 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/54a83150-21fe-4085-ad4c-5eb77724684a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.267121 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.267183 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.267413 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.369319 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.369392 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tdf\" (UniqueName: \"kubernetes.io/projected/54a83150-21fe-4085-ad4c-5eb77724684a-kube-api-access-f5tdf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.369435 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.369454 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.369483 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.369508 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.369532 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/54a83150-21fe-4085-ad4c-5eb77724684a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.370998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.371995 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/54a83150-21fe-4085-ad4c-5eb77724684a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.372070 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.374500 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.374537 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.374883 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.375258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.375682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.377374 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.380754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.391019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tdf\" (UniqueName: \"kubernetes.io/projected/54a83150-21fe-4085-ad4c-5eb77724684a-kube-api-access-f5tdf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-8wnhx\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:00 crc kubenswrapper[4771]: I0129 09:46:00.470207 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:46:01 crc kubenswrapper[4771]: I0129 09:46:01.010778 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx"] Jan 29 09:46:01 crc kubenswrapper[4771]: I0129 09:46:01.058091 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" event={"ID":"54a83150-21fe-4085-ad4c-5eb77724684a","Type":"ContainerStarted","Data":"0d7bf23b46b131fc87476742dc6d2c12606312f94985b0e8c193c805b0d3fc40"} Jan 29 09:46:01 crc kubenswrapper[4771]: I0129 09:46:01.839217 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:46:01 crc kubenswrapper[4771]: E0129 09:46:01.839643 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:46:02 crc kubenswrapper[4771]: I0129 09:46:02.066914 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" event={"ID":"54a83150-21fe-4085-ad4c-5eb77724684a","Type":"ContainerStarted","Data":"487cf3879ba14824499bf345c5e977a4418120704c2e35dc35b303aa771b7832"} Jan 29 09:46:02 crc kubenswrapper[4771]: I0129 09:46:02.085835 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" podStartSLOduration=1.607680694 podStartE2EDuration="2.085814015s" podCreationTimestamp="2026-01-29 09:46:00 +0000 UTC" firstStartedPulling="2026-01-29 09:46:01.022545328 +0000 UTC m=+2381.145385595" lastFinishedPulling="2026-01-29 09:46:01.500678699 +0000 UTC m=+2381.623518916" observedRunningTime="2026-01-29 09:46:02.082734071 +0000 UTC m=+2382.205574328" watchObservedRunningTime="2026-01-29 09:46:02.085814015 +0000 UTC m=+2382.208654252" Jan 29 09:46:14 crc kubenswrapper[4771]: I0129 09:46:14.838474 4771 scope.go:117] "RemoveContainer" 
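[Annotation] The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (09:46:02.085814015 − 09:46:00 = 2.085814015s), and podStartSLOduration appears to be that figure minus the image-pull window taken on the monotonic clock (m=+2381.623518916 − m=+2381.145385595 = 0.478133321s; 2.085814015 − 0.478133321 = 1.607680694s, exactly the logged value). Checking the arithmetic, up to float rounding:

```go
package main

import "fmt"

func main() {
	// Values copied from the pod_startup_latency_tracker entry for
	// nova-edpm-deployment-openstack-edpm-ipam-8wnhx.
	e2e := 2.085814015                      // observedRunningTime - podCreationTimestamp (s)
	pull := 2381.623518916 - 2381.145385595 // lastFinishedPulling - firstStartedPulling (s)
	fmt.Printf("pull window:  %.9f s\n", pull)    // 0.478133321
	fmt.Printf("SLO duration: %.9f s\n", e2e-pull) // 1.607680694, matching podStartSLOduration
}
```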
containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:46:14 crc kubenswrapper[4771]: E0129 09:46:14.839240 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:46:25 crc kubenswrapper[4771]: I0129 09:46:25.838806 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:46:25 crc kubenswrapper[4771]: E0129 09:46:25.839728 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:46:40 crc kubenswrapper[4771]: I0129 09:46:40.849953 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:46:40 crc kubenswrapper[4771]: E0129 09:46:40.850980 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:46:51 crc kubenswrapper[4771]: I0129 09:46:51.837884 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:46:51 crc kubenswrapper[4771]: E0129 09:46:51.838770 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:47:04 crc kubenswrapper[4771]: I0129 09:47:04.838971 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:47:04 crc kubenswrapper[4771]: E0129 09:47:04.842238 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:47:16 crc kubenswrapper[4771]: I0129 09:47:16.838077 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:47:16 crc kubenswrapper[4771]: E0129 09:47:16.838752 4771 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:47:28 crc kubenswrapper[4771]: I0129 09:47:28.839549 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:47:28 crc kubenswrapper[4771]: E0129 09:47:28.840487 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:47:42 crc kubenswrapper[4771]: I0129 09:47:42.838622 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:47:42 crc kubenswrapper[4771]: E0129 09:47:42.839672 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:47:56 crc kubenswrapper[4771]: I0129 09:47:56.838155 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:47:56 crc kubenswrapper[4771]: E0129 09:47:56.838853 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:48:08 crc kubenswrapper[4771]: I0129 09:48:08.837940 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:48:08 crc kubenswrapper[4771]: E0129 09:48:08.839976 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:48:12 crc kubenswrapper[4771]: I0129 09:48:12.794604 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1ea3117f-141f-46c2-bee3-71a88181068c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 29 09:48:12 crc kubenswrapper[4771]: I0129 09:48:12.794732 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1ea3117f-141f-46c2-bee3-71a88181068c" 
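[Annotation] The ceilometer-0 liveness probe failures here (output="command timed out") are exec-style probes whose command did not return within the probe's timeoutSeconds; the kubelet logs them at info level and only restarts the container once failureThreshold consecutive failures accrue. A minimal reproduction of the timeout mechanism in plain Go — generic code, not the kubelet's prober, with a 1s timeout and a sleep command as our stand-ins:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Emulate an exec probe with timeoutSeconds: 1 running a command
	// that takes too long, yielding the "command timed out" outcome.
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()
	err := exec.CommandContext(ctx, "sleep", "5").Run()
	switch {
	case ctx.Err() == context.DeadlineExceeded:
		fmt.Println(`probeResult="failure" output="command timed out"`)
	case err != nil:
		fmt.Println("probe failed:", err)
	default:
		fmt.Println("probe succeeded")
	}
}
```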
containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 29 09:48:19 crc kubenswrapper[4771]: I0129 09:48:19.398752 4771 generic.go:334] "Generic (PLEG): container finished" podID="54a83150-21fe-4085-ad4c-5eb77724684a" containerID="487cf3879ba14824499bf345c5e977a4418120704c2e35dc35b303aa771b7832" exitCode=0 Jan 29 09:48:19 crc kubenswrapper[4771]: I0129 09:48:19.398843 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" event={"ID":"54a83150-21fe-4085-ad4c-5eb77724684a","Type":"ContainerDied","Data":"487cf3879ba14824499bf345c5e977a4418120704c2e35dc35b303aa771b7832"} Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.831997 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.931740 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-combined-ca-bundle\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.931873 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-0\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.931919 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-1\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.931950 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5tdf\" (UniqueName: \"kubernetes.io/projected/54a83150-21fe-4085-ad4c-5eb77724684a-kube-api-access-f5tdf\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.932024 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/54a83150-21fe-4085-ad4c-5eb77724684a-nova-extra-config-0\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.932612 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-inventory\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.932759 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-0\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.932789 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-ssh-key-openstack-edpm-ipam\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.932841 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-1\") pod \"54a83150-21fe-4085-ad4c-5eb77724684a\" (UID: \"54a83150-21fe-4085-ad4c-5eb77724684a\") " Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.938904 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54a83150-21fe-4085-ad4c-5eb77724684a-kube-api-access-f5tdf" (OuterVolumeSpecName: "kube-api-access-f5tdf") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "kube-api-access-f5tdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.939840 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.960057 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.961544 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.962026 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.964845 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.965413 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54a83150-21fe-4085-ad4c-5eb77724684a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.966895 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-inventory" (OuterVolumeSpecName: "inventory") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:48:20 crc kubenswrapper[4771]: I0129 09:48:20.975501 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "54a83150-21fe-4085-ad4c-5eb77724684a" (UID: "54a83150-21fe-4085-ad4c-5eb77724684a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036787 4771 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/54a83150-21fe-4085-ad4c-5eb77724684a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036870 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036885 4771 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036900 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036912 4771 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036923 4771 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036935 4771 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036945 4771 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/54a83150-21fe-4085-ad4c-5eb77724684a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.036956 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5tdf\" (UniqueName: \"kubernetes.io/projected/54a83150-21fe-4085-ad4c-5eb77724684a-kube-api-access-f5tdf\") on node \"crc\" DevicePath \"\"" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.422455 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" event={"ID":"54a83150-21fe-4085-ad4c-5eb77724684a","Type":"ContainerDied","Data":"0d7bf23b46b131fc87476742dc6d2c12606312f94985b0e8c193c805b0d3fc40"} Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.422504 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d7bf23b46b131fc87476742dc6d2c12606312f94985b0e8c193c805b0d3fc40" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.422519 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-8wnhx" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.541945 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6"] Jan 29 09:48:21 crc kubenswrapper[4771]: E0129 09:48:21.542503 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a83150-21fe-4085-ad4c-5eb77724684a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.542531 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a83150-21fe-4085-ad4c-5eb77724684a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.542955 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a83150-21fe-4085-ad4c-5eb77724684a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.543755 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.546000 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v45d2" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.562630 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.562772 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.562772 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.563022 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.574577 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6"] Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.650835 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.650948 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.651024 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.651134 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.651220 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzm6p\" (UniqueName: \"kubernetes.io/projected/10f62904-f030-4614-accd-3c95e39c2c6a-kube-api-access-vzm6p\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 
crc kubenswrapper[4771]: I0129 09:48:21.651403 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.651457 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.752811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.752857 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.752894 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.752920 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.752955 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.753008 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.753043 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzm6p\" (UniqueName: \"kubernetes.io/projected/10f62904-f030-4614-accd-3c95e39c2c6a-kube-api-access-vzm6p\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.756993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.757317 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.757477 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.757897 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.758369 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.758580 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.777876 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzm6p\" (UniqueName: 
\"kubernetes.io/projected/10f62904-f030-4614-accd-3c95e39c2c6a-kube-api-access-vzm6p\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:21 crc kubenswrapper[4771]: I0129 09:48:21.873217 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:48:22 crc kubenswrapper[4771]: I0129 09:48:22.369184 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6"] Jan 29 09:48:22 crc kubenswrapper[4771]: W0129 09:48:22.372579 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10f62904_f030_4614_accd_3c95e39c2c6a.slice/crio-6dc359033dc9757052ff51bd5d74c6aef1565ffcc060d996a4780f51d45ec5f6 WatchSource:0}: Error finding container 6dc359033dc9757052ff51bd5d74c6aef1565ffcc060d996a4780f51d45ec5f6: Status 404 returned error can't find the container with id 6dc359033dc9757052ff51bd5d74c6aef1565ffcc060d996a4780f51d45ec5f6 Jan 29 09:48:22 crc kubenswrapper[4771]: I0129 09:48:22.430021 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" event={"ID":"10f62904-f030-4614-accd-3c95e39c2c6a","Type":"ContainerStarted","Data":"6dc359033dc9757052ff51bd5d74c6aef1565ffcc060d996a4780f51d45ec5f6"} Jan 29 09:48:22 crc kubenswrapper[4771]: I0129 09:48:22.837821 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:48:22 crc kubenswrapper[4771]: E0129 09:48:22.838503 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:48:23 crc kubenswrapper[4771]: I0129 09:48:23.441620 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" event={"ID":"10f62904-f030-4614-accd-3c95e39c2c6a","Type":"ContainerStarted","Data":"f741d1dc12955797ca213f0f20760e12c21d2fa88745a5b441bb26f8a43e7235"} Jan 29 09:48:23 crc kubenswrapper[4771]: I0129 09:48:23.480567 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" podStartSLOduration=2.05408215 podStartE2EDuration="2.480548167s" podCreationTimestamp="2026-01-29 09:48:21 +0000 UTC" firstStartedPulling="2026-01-29 09:48:22.375397314 +0000 UTC m=+2522.498237541" lastFinishedPulling="2026-01-29 09:48:22.801863331 +0000 UTC m=+2522.924703558" observedRunningTime="2026-01-29 09:48:23.472329334 +0000 UTC m=+2523.595169561" watchObservedRunningTime="2026-01-29 09:48:23.480548167 +0000 UTC m=+2523.603388394" Jan 29 09:48:37 crc kubenswrapper[4771]: I0129 09:48:37.837885 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:48:37 crc kubenswrapper[4771]: E0129 09:48:37.838617 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:48:50 crc kubenswrapper[4771]: I0129 09:48:50.849089 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:48:50 crc kubenswrapper[4771]: E0129 09:48:50.849799 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:49:03 crc kubenswrapper[4771]: I0129 09:49:03.839314 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:49:03 crc kubenswrapper[4771]: E0129 09:49:03.840387 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:49:16 crc kubenswrapper[4771]: I0129 09:49:16.838848 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:49:16 crc kubenswrapper[4771]: E0129 09:49:16.839857 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:49:22 crc kubenswrapper[4771]: I0129 09:49:22.794024 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1ea3117f-141f-46c2-bee3-71a88181068c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 29 09:49:27 crc kubenswrapper[4771]: I0129 09:49:27.793944 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1ea3117f-141f-46c2-bee3-71a88181068c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 29 09:49:30 crc kubenswrapper[4771]: I0129 09:49:30.843943 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:49:30 crc kubenswrapper[4771]: E0129 09:49:30.844779 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" 
podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:49:32 crc kubenswrapper[4771]: I0129 09:49:32.253402 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="1ea3117f-141f-46c2-bee3-71a88181068c" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 29 09:49:32 crc kubenswrapper[4771]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 29 09:49:32 crc kubenswrapper[4771]: > Jan 29 09:49:32 crc kubenswrapper[4771]: I0129 09:49:32.253798 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 29 09:49:32 crc kubenswrapper[4771]: I0129 09:49:32.254940 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"a9c3a7dac56bcc9c9a4d60fc9aea95f071a793b2ef06baf89b531a4a8e66b366"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 29 09:49:32 crc kubenswrapper[4771]: I0129 09:49:32.255018 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea3117f-141f-46c2-bee3-71a88181068c" containerName="ceilometer-central-agent" containerID="cri-o://a9c3a7dac56bcc9c9a4d60fc9aea95f071a793b2ef06baf89b531a4a8e66b366" gracePeriod=30 Jan 29 09:49:33 crc kubenswrapper[4771]: I0129 09:49:33.128533 4771 generic.go:334] "Generic (PLEG): container finished" podID="1ea3117f-141f-46c2-bee3-71a88181068c" containerID="a9c3a7dac56bcc9c9a4d60fc9aea95f071a793b2ef06baf89b531a4a8e66b366" exitCode=0 Jan 29 09:49:33 crc kubenswrapper[4771]: I0129 09:49:33.128610 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea3117f-141f-46c2-bee3-71a88181068c","Type":"ContainerDied","Data":"a9c3a7dac56bcc9c9a4d60fc9aea95f071a793b2ef06baf89b531a4a8e66b366"} Jan 29 09:49:33 crc kubenswrapper[4771]: I0129 09:49:33.192020 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:49:34 crc kubenswrapper[4771]: I0129 09:49:34.140678 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea3117f-141f-46c2-bee3-71a88181068c","Type":"ContainerStarted","Data":"5b969a1f7434a696e02b96cf8f7211fa0b1b327f4f6184f31c15c0d81844de08"} Jan 29 09:49:41 crc kubenswrapper[4771]: I0129 09:49:41.838205 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:49:41 crc kubenswrapper[4771]: E0129 09:49:41.838999 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:49:55 crc kubenswrapper[4771]: I0129 09:49:55.838340 4771 scope.go:117] "RemoveContainer" containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:49:56 crc kubenswrapper[4771]: I0129 09:49:56.328064 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" 
event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"2bb1893b4d187f4e0412bac35cff8b156d40bf788059603b5f775d27a03e618f"} Jan 29 09:51:06 crc kubenswrapper[4771]: I0129 09:51:06.947126 4771 generic.go:334] "Generic (PLEG): container finished" podID="10f62904-f030-4614-accd-3c95e39c2c6a" containerID="f741d1dc12955797ca213f0f20760e12c21d2fa88745a5b441bb26f8a43e7235" exitCode=0 Jan 29 09:51:06 crc kubenswrapper[4771]: I0129 09:51:06.947181 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" event={"ID":"10f62904-f030-4614-accd-3c95e39c2c6a","Type":"ContainerDied","Data":"f741d1dc12955797ca213f0f20760e12c21d2fa88745a5b441bb26f8a43e7235"} Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.363072 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.419226 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ssh-key-openstack-edpm-ipam\") pod \"10f62904-f030-4614-accd-3c95e39c2c6a\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.419281 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-0\") pod \"10f62904-f030-4614-accd-3c95e39c2c6a\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.419315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzm6p\" (UniqueName: \"kubernetes.io/projected/10f62904-f030-4614-accd-3c95e39c2c6a-kube-api-access-vzm6p\") pod \"10f62904-f030-4614-accd-3c95e39c2c6a\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.419354 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-2\") pod \"10f62904-f030-4614-accd-3c95e39c2c6a\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.419375 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-1\") pod \"10f62904-f030-4614-accd-3c95e39c2c6a\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.419407 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-telemetry-combined-ca-bundle\") pod \"10f62904-f030-4614-accd-3c95e39c2c6a\" (UID: \"10f62904-f030-4614-accd-3c95e39c2c6a\") " Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.419424 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-inventory\") pod \"10f62904-f030-4614-accd-3c95e39c2c6a\" (UID: 
\"10f62904-f030-4614-accd-3c95e39c2c6a\") " Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.426036 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "10f62904-f030-4614-accd-3c95e39c2c6a" (UID: "10f62904-f030-4614-accd-3c95e39c2c6a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.426952 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f62904-f030-4614-accd-3c95e39c2c6a-kube-api-access-vzm6p" (OuterVolumeSpecName: "kube-api-access-vzm6p") pod "10f62904-f030-4614-accd-3c95e39c2c6a" (UID: "10f62904-f030-4614-accd-3c95e39c2c6a"). InnerVolumeSpecName "kube-api-access-vzm6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.450592 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "10f62904-f030-4614-accd-3c95e39c2c6a" (UID: "10f62904-f030-4614-accd-3c95e39c2c6a"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.457872 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10f62904-f030-4614-accd-3c95e39c2c6a" (UID: "10f62904-f030-4614-accd-3c95e39c2c6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.458270 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "10f62904-f030-4614-accd-3c95e39c2c6a" (UID: "10f62904-f030-4614-accd-3c95e39c2c6a"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.460753 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-inventory" (OuterVolumeSpecName: "inventory") pod "10f62904-f030-4614-accd-3c95e39c2c6a" (UID: "10f62904-f030-4614-accd-3c95e39c2c6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.470732 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "10f62904-f030-4614-accd-3c95e39c2c6a" (UID: "10f62904-f030-4614-accd-3c95e39c2c6a"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.522089 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.522115 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.522129 4771 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.522139 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.522147 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.522155 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/10f62904-f030-4614-accd-3c95e39c2c6a-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.522164 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzm6p\" (UniqueName: \"kubernetes.io/projected/10f62904-f030-4614-accd-3c95e39c2c6a-kube-api-access-vzm6p\") on node \"crc\" DevicePath \"\"" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.965627 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" event={"ID":"10f62904-f030-4614-accd-3c95e39c2c6a","Type":"ContainerDied","Data":"6dc359033dc9757052ff51bd5d74c6aef1565ffcc060d996a4780f51d45ec5f6"} Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.965659 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6" Jan 29 09:51:08 crc kubenswrapper[4771]: I0129 09:51:08.965670 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc359033dc9757052ff51bd5d74c6aef1565ffcc060d996a4780f51d45ec5f6" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.914653 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:52:01 crc kubenswrapper[4771]: E0129 09:52:01.916171 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f62904-f030-4614-accd-3c95e39c2c6a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.916196 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f62904-f030-4614-accd-3c95e39c2c6a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.916886 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f62904-f030-4614-accd-3c95e39c2c6a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.918005 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.928030 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-trsvj" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.928145 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.928156 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.930878 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:52:01 crc kubenswrapper[4771]: I0129 09:52:01.931821 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070563 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070636 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070686 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p92vm\" (UniqueName: \"kubernetes.io/projected/be095875-5658-44fb-9c4b-90d1bc093cf3-kube-api-access-p92vm\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070746 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070797 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070816 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070863 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070882 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-config-data\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.070899 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.172333 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p92vm\" (UniqueName: \"kubernetes.io/projected/be095875-5658-44fb-9c4b-90d1bc093cf3-kube-api-access-p92vm\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.172417 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.172477 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.173521 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.173610 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.173643 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-config-data\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.173667 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.173757 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.173820 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.174240 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.177258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.177268 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.179110 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-config-data\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.181519 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.183174 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.186480 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.189659 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.192425 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p92vm\" (UniqueName: \"kubernetes.io/projected/be095875-5658-44fb-9c4b-90d1bc093cf3-kube-api-access-p92vm\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.205236 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.247330 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:52:02 crc kubenswrapper[4771]: I0129 09:52:02.726344 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:52:02 crc kubenswrapper[4771]: W0129 09:52:02.729838 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe095875_5658_44fb_9c4b_90d1bc093cf3.slice/crio-d94549f5bb65584eaf5e3da534bf9b43c515de134aac8a3d50253fafe03bd83c WatchSource:0}: Error finding container d94549f5bb65584eaf5e3da534bf9b43c515de134aac8a3d50253fafe03bd83c: Status 404 returned error can't find the container with id d94549f5bb65584eaf5e3da534bf9b43c515de134aac8a3d50253fafe03bd83c Jan 29 09:52:03 crc kubenswrapper[4771]: I0129 09:52:03.456526 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"be095875-5658-44fb-9c4b-90d1bc093cf3","Type":"ContainerStarted","Data":"d94549f5bb65584eaf5e3da534bf9b43c515de134aac8a3d50253fafe03bd83c"} Jan 29 09:52:14 crc kubenswrapper[4771]: I0129 09:52:14.272438 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:52:14 crc kubenswrapper[4771]: I0129 09:52:14.273353 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:52:43 crc kubenswrapper[4771]: E0129 09:52:43.632892 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 29 09:52:43 crc kubenswrapper[4771]: E0129 09:52:43.633560 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p92vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(be095875-5658-44fb-9c4b-90d1bc093cf3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:52:43 crc kubenswrapper[4771]: E0129 09:52:43.634837 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="be095875-5658-44fb-9c4b-90d1bc093cf3" Jan 29 09:52:43 crc kubenswrapper[4771]: E0129 09:52:43.867599 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="be095875-5658-44fb-9c4b-90d1bc093cf3" Jan 29 09:52:44 crc kubenswrapper[4771]: I0129 09:52:44.271395 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:52:44 crc kubenswrapper[4771]: I0129 09:52:44.271454 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.753645 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dxwjn"] Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.756941 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.769508 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dxwjn"] Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.791770 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-utilities\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.791840 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gw7b\" (UniqueName: \"kubernetes.io/projected/a26f0e76-1d80-4c56-966f-61c9bc46c85c-kube-api-access-6gw7b\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.791895 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-catalog-content\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.893068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gw7b\" (UniqueName: \"kubernetes.io/projected/a26f0e76-1d80-4c56-966f-61c9bc46c85c-kube-api-access-6gw7b\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.893165 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-catalog-content\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.893311 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-utilities\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.893824 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-utilities\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.893857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-catalog-content\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:45 crc kubenswrapper[4771]: I0129 09:52:45.915688 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gw7b\" (UniqueName: \"kubernetes.io/projected/a26f0e76-1d80-4c56-966f-61c9bc46c85c-kube-api-access-6gw7b\") pod \"community-operators-dxwjn\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:46 crc kubenswrapper[4771]: I0129 09:52:46.077898 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:46 crc kubenswrapper[4771]: I0129 09:52:46.600117 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dxwjn"] Jan 29 09:52:46 crc kubenswrapper[4771]: I0129 09:52:46.891257 4771 generic.go:334] "Generic (PLEG): container finished" podID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerID="466eb388b9472867dd4dccc464152fe4aaf08d4b2874da4adffe45d9b78a9772" exitCode=0 Jan 29 09:52:46 crc kubenswrapper[4771]: I0129 09:52:46.891334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxwjn" event={"ID":"a26f0e76-1d80-4c56-966f-61c9bc46c85c","Type":"ContainerDied","Data":"466eb388b9472867dd4dccc464152fe4aaf08d4b2874da4adffe45d9b78a9772"} Jan 29 09:52:46 crc kubenswrapper[4771]: I0129 09:52:46.891691 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxwjn" event={"ID":"a26f0e76-1d80-4c56-966f-61c9bc46c85c","Type":"ContainerStarted","Data":"9b07493c186850118e344707f0bc1108d94da344169b0e41dc8afc47391ecd62"} Jan 29 09:52:47 crc kubenswrapper[4771]: I0129 09:52:47.909504 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxwjn" event={"ID":"a26f0e76-1d80-4c56-966f-61c9bc46c85c","Type":"ContainerStarted","Data":"75316cb4b2308bc7491fa120420a54d25bceaafccb5166492965326c84adfbc2"} Jan 29 09:52:48 crc kubenswrapper[4771]: I0129 09:52:48.923845 4771 generic.go:334] "Generic (PLEG): container finished" podID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerID="75316cb4b2308bc7491fa120420a54d25bceaafccb5166492965326c84adfbc2" exitCode=0 Jan 29 09:52:48 crc kubenswrapper[4771]: I0129 09:52:48.924053 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxwjn" event={"ID":"a26f0e76-1d80-4c56-966f-61c9bc46c85c","Type":"ContainerDied","Data":"75316cb4b2308bc7491fa120420a54d25bceaafccb5166492965326c84adfbc2"} Jan 29 09:52:49 crc kubenswrapper[4771]: I0129 09:52:49.941000 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxwjn" event={"ID":"a26f0e76-1d80-4c56-966f-61c9bc46c85c","Type":"ContainerStarted","Data":"77a65f21e293ce727f1bbacc8aef282582f73d4ab5a4f7144ffe8b340c24bd43"} Jan 29 09:52:49 crc kubenswrapper[4771]: I0129 09:52:49.970758 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dxwjn" podStartSLOduration=2.52228537 podStartE2EDuration="4.970736454s" podCreationTimestamp="2026-01-29 09:52:45 +0000 UTC" firstStartedPulling="2026-01-29 09:52:46.894144818 +0000 UTC m=+2787.016985045" lastFinishedPulling="2026-01-29 09:52:49.342595902 +0000 UTC m=+2789.465436129" observedRunningTime="2026-01-29 09:52:49.964508654 +0000 UTC m=+2790.087348881" watchObservedRunningTime="2026-01-29 09:52:49.970736454 +0000 UTC m=+2790.093576681" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.342641 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j28p4"] Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.345972 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.364445 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j28p4"] Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.458911 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-catalog-content\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.458969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxmp\" (UniqueName: \"kubernetes.io/projected/db1b17e6-0ea6-4124-9a4a-e076586623e1-kube-api-access-5sxmp\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.459078 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-utilities\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.560585 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-utilities\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.560736 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-catalog-content\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.560757 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sxmp\" (UniqueName: \"kubernetes.io/projected/db1b17e6-0ea6-4124-9a4a-e076586623e1-kube-api-access-5sxmp\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.561540 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-catalog-content\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.561607 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-utilities\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.580336 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5sxmp\" (UniqueName: \"kubernetes.io/projected/db1b17e6-0ea6-4124-9a4a-e076586623e1-kube-api-access-5sxmp\") pod \"redhat-operators-j28p4\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:53 crc kubenswrapper[4771]: I0129 09:52:53.669833 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:52:54 crc kubenswrapper[4771]: I0129 09:52:54.131148 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j28p4"] Jan 29 09:52:54 crc kubenswrapper[4771]: W0129 09:52:54.131628 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb1b17e6_0ea6_4124_9a4a_e076586623e1.slice/crio-2e7c889f7986ae8a2ba487e7bcb5d0eb2ff9e0d6b88f075941332411b691e2ee WatchSource:0}: Error finding container 2e7c889f7986ae8a2ba487e7bcb5d0eb2ff9e0d6b88f075941332411b691e2ee: Status 404 returned error can't find the container with id 2e7c889f7986ae8a2ba487e7bcb5d0eb2ff9e0d6b88f075941332411b691e2ee Jan 29 09:52:55 crc kubenswrapper[4771]: I0129 09:52:55.007798 4771 generic.go:334] "Generic (PLEG): container finished" podID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerID="8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63" exitCode=0 Jan 29 09:52:55 crc kubenswrapper[4771]: I0129 09:52:55.008393 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j28p4" event={"ID":"db1b17e6-0ea6-4124-9a4a-e076586623e1","Type":"ContainerDied","Data":"8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63"} Jan 29 09:52:55 crc kubenswrapper[4771]: I0129 09:52:55.008441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j28p4" event={"ID":"db1b17e6-0ea6-4124-9a4a-e076586623e1","Type":"ContainerStarted","Data":"2e7c889f7986ae8a2ba487e7bcb5d0eb2ff9e0d6b88f075941332411b691e2ee"} Jan 29 09:52:56 crc kubenswrapper[4771]: I0129 09:52:56.021211 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j28p4" event={"ID":"db1b17e6-0ea6-4124-9a4a-e076586623e1","Type":"ContainerStarted","Data":"26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6"} Jan 29 09:52:56 crc kubenswrapper[4771]: I0129 09:52:56.078359 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:56 crc kubenswrapper[4771]: I0129 09:52:56.078414 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:56 crc kubenswrapper[4771]: I0129 09:52:56.151557 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:57 crc kubenswrapper[4771]: I0129 09:52:57.046681 4771 generic.go:334] "Generic (PLEG): container finished" podID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerID="26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6" exitCode=0 Jan 29 09:52:57 crc kubenswrapper[4771]: I0129 09:52:57.047420 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j28p4" event={"ID":"db1b17e6-0ea6-4124-9a4a-e076586623e1","Type":"ContainerDied","Data":"26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6"} Jan 29 09:52:57 crc 
kubenswrapper[4771]: I0129 09:52:57.120867 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:52:58 crc kubenswrapper[4771]: I0129 09:52:58.531629 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxwjn"] Jan 29 09:52:59 crc kubenswrapper[4771]: I0129 09:52:59.067589 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dxwjn" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="registry-server" containerID="cri-o://77a65f21e293ce727f1bbacc8aef282582f73d4ab5a4f7144ffe8b340c24bd43" gracePeriod=2 Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.079684 4771 generic.go:334] "Generic (PLEG): container finished" podID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerID="77a65f21e293ce727f1bbacc8aef282582f73d4ab5a4f7144ffe8b340c24bd43" exitCode=0 Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.079738 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxwjn" event={"ID":"a26f0e76-1d80-4c56-966f-61c9bc46c85c","Type":"ContainerDied","Data":"77a65f21e293ce727f1bbacc8aef282582f73d4ab5a4f7144ffe8b340c24bd43"} Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.429999 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.850350 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.930551 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-catalog-content\") pod \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.930847 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-utilities\") pod \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.930891 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gw7b\" (UniqueName: \"kubernetes.io/projected/a26f0e76-1d80-4c56-966f-61c9bc46c85c-kube-api-access-6gw7b\") pod \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\" (UID: \"a26f0e76-1d80-4c56-966f-61c9bc46c85c\") " Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.934634 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-utilities" (OuterVolumeSpecName: "utilities") pod "a26f0e76-1d80-4c56-966f-61c9bc46c85c" (UID: "a26f0e76-1d80-4c56-966f-61c9bc46c85c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.937396 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a26f0e76-1d80-4c56-966f-61c9bc46c85c-kube-api-access-6gw7b" (OuterVolumeSpecName: "kube-api-access-6gw7b") pod "a26f0e76-1d80-4c56-966f-61c9bc46c85c" (UID: "a26f0e76-1d80-4c56-966f-61c9bc46c85c"). 
InnerVolumeSpecName "kube-api-access-6gw7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:53:00 crc kubenswrapper[4771]: I0129 09:53:00.979016 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a26f0e76-1d80-4c56-966f-61c9bc46c85c" (UID: "a26f0e76-1d80-4c56-966f-61c9bc46c85c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.033506 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.033545 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gw7b\" (UniqueName: \"kubernetes.io/projected/a26f0e76-1d80-4c56-966f-61c9bc46c85c-kube-api-access-6gw7b\") on node \"crc\" DevicePath \"\"" Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.033556 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26f0e76-1d80-4c56-966f-61c9bc46c85c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.095284 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxwjn" event={"ID":"a26f0e76-1d80-4c56-966f-61c9bc46c85c","Type":"ContainerDied","Data":"9b07493c186850118e344707f0bc1108d94da344169b0e41dc8afc47391ecd62"} Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.095349 4771 scope.go:117] "RemoveContainer" containerID="77a65f21e293ce727f1bbacc8aef282582f73d4ab5a4f7144ffe8b340c24bd43" Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.095374 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dxwjn" Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.129215 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxwjn"] Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.146872 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dxwjn"] Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.147976 4771 scope.go:117] "RemoveContainer" containerID="75316cb4b2308bc7491fa120420a54d25bceaafccb5166492965326c84adfbc2" Jan 29 09:53:01 crc kubenswrapper[4771]: I0129 09:53:01.179398 4771 scope.go:117] "RemoveContainer" containerID="466eb388b9472867dd4dccc464152fe4aaf08d4b2874da4adffe45d9b78a9772" Jan 29 09:53:02 crc kubenswrapper[4771]: I0129 09:53:02.108138 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"be095875-5658-44fb-9c4b-90d1bc093cf3","Type":"ContainerStarted","Data":"6256b01082efafe6d0b5400478058c6e17a8c1b78df5e409ecb00514b748569a"} Jan 29 09:53:02 crc kubenswrapper[4771]: I0129 09:53:02.115096 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j28p4" event={"ID":"db1b17e6-0ea6-4124-9a4a-e076586623e1","Type":"ContainerStarted","Data":"44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f"} Jan 29 09:53:02 crc kubenswrapper[4771]: I0129 09:53:02.146115 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j28p4" podStartSLOduration=2.979391315 podStartE2EDuration="9.146087205s" podCreationTimestamp="2026-01-29 09:52:53 +0000 UTC" firstStartedPulling="2026-01-29 09:52:55.012649715 +0000 UTC m=+2795.135489962" lastFinishedPulling="2026-01-29 09:53:01.179345625 +0000 UTC m=+2801.302185852" observedRunningTime="2026-01-29 09:53:02.139128565 +0000 UTC m=+2802.261968842" watchObservedRunningTime="2026-01-29 09:53:02.146087205 +0000 UTC m=+2802.268927442" Jan 29 09:53:02 crc kubenswrapper[4771]: I0129 09:53:02.148753 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.463250169 podStartE2EDuration="1m2.148736487s" podCreationTimestamp="2026-01-29 09:52:00 +0000 UTC" firstStartedPulling="2026-01-29 09:52:02.732644726 +0000 UTC m=+2742.855484953" lastFinishedPulling="2026-01-29 09:53:00.418131044 +0000 UTC m=+2800.540971271" observedRunningTime="2026-01-29 09:53:02.12458182 +0000 UTC m=+2802.247422047" watchObservedRunningTime="2026-01-29 09:53:02.148736487 +0000 UTC m=+2802.271576724" Jan 29 09:53:02 crc kubenswrapper[4771]: I0129 09:53:02.850835 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" path="/var/lib/kubelet/pods/a26f0e76-1d80-4c56-966f-61c9bc46c85c/volumes" Jan 29 09:53:03 crc kubenswrapper[4771]: I0129 09:53:03.670866 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:53:03 crc kubenswrapper[4771]: I0129 09:53:03.671191 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:53:04 crc kubenswrapper[4771]: I0129 09:53:04.730513 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j28p4" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="registry-server" 
probeResult="failure" output=< Jan 29 09:53:04 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:53:04 crc kubenswrapper[4771]: > Jan 29 09:53:13 crc kubenswrapper[4771]: I0129 09:53:13.719154 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:53:13 crc kubenswrapper[4771]: I0129 09:53:13.781084 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:53:13 crc kubenswrapper[4771]: I0129 09:53:13.958277 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j28p4"] Jan 29 09:53:14 crc kubenswrapper[4771]: I0129 09:53:14.271623 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:53:14 crc kubenswrapper[4771]: I0129 09:53:14.271708 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:53:14 crc kubenswrapper[4771]: I0129 09:53:14.271761 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:53:14 crc kubenswrapper[4771]: I0129 09:53:14.272291 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bb1893b4d187f4e0412bac35cff8b156d40bf788059603b5f775d27a03e618f"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:53:14 crc kubenswrapper[4771]: I0129 09:53:14.272358 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://2bb1893b4d187f4e0412bac35cff8b156d40bf788059603b5f775d27a03e618f" gracePeriod=600 Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.230353 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="2bb1893b4d187f4e0412bac35cff8b156d40bf788059603b5f775d27a03e618f" exitCode=0 Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.230412 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"2bb1893b4d187f4e0412bac35cff8b156d40bf788059603b5f775d27a03e618f"} Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.230948 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e"} Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.230968 4771 scope.go:117] "RemoveContainer" 
containerID="8ec7bdd2b1523ef360189a6e513ebd55e14c904c1125a25dd090c02b49acb28d" Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.231093 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j28p4" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="registry-server" containerID="cri-o://44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f" gracePeriod=2 Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.697724 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.739164 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-utilities\") pod \"db1b17e6-0ea6-4124-9a4a-e076586623e1\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.739433 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sxmp\" (UniqueName: \"kubernetes.io/projected/db1b17e6-0ea6-4124-9a4a-e076586623e1-kube-api-access-5sxmp\") pod \"db1b17e6-0ea6-4124-9a4a-e076586623e1\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.739565 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-catalog-content\") pod \"db1b17e6-0ea6-4124-9a4a-e076586623e1\" (UID: \"db1b17e6-0ea6-4124-9a4a-e076586623e1\") " Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.742640 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-utilities" (OuterVolumeSpecName: "utilities") pod "db1b17e6-0ea6-4124-9a4a-e076586623e1" (UID: "db1b17e6-0ea6-4124-9a4a-e076586623e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.749555 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1b17e6-0ea6-4124-9a4a-e076586623e1-kube-api-access-5sxmp" (OuterVolumeSpecName: "kube-api-access-5sxmp") pod "db1b17e6-0ea6-4124-9a4a-e076586623e1" (UID: "db1b17e6-0ea6-4124-9a4a-e076586623e1"). InnerVolumeSpecName "kube-api-access-5sxmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.842902 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.842945 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sxmp\" (UniqueName: \"kubernetes.io/projected/db1b17e6-0ea6-4124-9a4a-e076586623e1-kube-api-access-5sxmp\") on node \"crc\" DevicePath \"\"" Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.881185 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db1b17e6-0ea6-4124-9a4a-e076586623e1" (UID: "db1b17e6-0ea6-4124-9a4a-e076586623e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:53:15 crc kubenswrapper[4771]: I0129 09:53:15.947664 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1b17e6-0ea6-4124-9a4a-e076586623e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.242982 4771 generic.go:334] "Generic (PLEG): container finished" podID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerID="44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f" exitCode=0 Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.243054 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j28p4" event={"ID":"db1b17e6-0ea6-4124-9a4a-e076586623e1","Type":"ContainerDied","Data":"44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f"} Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.243089 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j28p4" event={"ID":"db1b17e6-0ea6-4124-9a4a-e076586623e1","Type":"ContainerDied","Data":"2e7c889f7986ae8a2ba487e7bcb5d0eb2ff9e0d6b88f075941332411b691e2ee"} Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.243109 4771 scope.go:117] "RemoveContainer" containerID="44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.243294 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j28p4" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.288511 4771 scope.go:117] "RemoveContainer" containerID="26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.295726 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j28p4"] Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.312977 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j28p4"] Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.322298 4771 scope.go:117] "RemoveContainer" containerID="8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.362864 4771 scope.go:117] "RemoveContainer" containerID="44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f" Jan 29 09:53:16 crc kubenswrapper[4771]: E0129 09:53:16.363220 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f\": container with ID starting with 44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f not found: ID does not exist" containerID="44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.363253 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f"} err="failed to get container status \"44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f\": rpc error: code = NotFound desc = could not find container \"44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f\": container with ID starting with 44194e9589feee750cd42e853abb1b55fb76a904bed63d07600286d235c3be0f not found: ID does not exist" Jan 29 09:53:16 crc 
kubenswrapper[4771]: I0129 09:53:16.363275 4771 scope.go:117] "RemoveContainer" containerID="26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6" Jan 29 09:53:16 crc kubenswrapper[4771]: E0129 09:53:16.363461 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6\": container with ID starting with 26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6 not found: ID does not exist" containerID="26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.363593 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6"} err="failed to get container status \"26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6\": rpc error: code = NotFound desc = could not find container \"26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6\": container with ID starting with 26d6adad48bd9ed47e98810050fcbd91e3fc718d10e6829fa8e4f981fe9fc5a6 not found: ID does not exist" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.363613 4771 scope.go:117] "RemoveContainer" containerID="8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63" Jan 29 09:53:16 crc kubenswrapper[4771]: E0129 09:53:16.363822 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63\": container with ID starting with 8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63 not found: ID does not exist" containerID="8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.363842 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63"} err="failed to get container status \"8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63\": rpc error: code = NotFound desc = could not find container \"8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63\": container with ID starting with 8687f63de1108cd222d7c635c9a908b261ae061e17e8eefb0e1f3565d72e0f63 not found: ID does not exist" Jan 29 09:53:16 crc kubenswrapper[4771]: I0129 09:53:16.849592 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" path="/var/lib/kubelet/pods/db1b17e6-0ea6-4124-9a4a-e076586623e1/volumes" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.200185 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fnqbx"] Jan 29 09:54:26 crc kubenswrapper[4771]: E0129 09:54:26.201197 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="registry-server" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201212 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="registry-server" Jan 29 09:54:26 crc kubenswrapper[4771]: E0129 09:54:26.201220 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="extract-content" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201226 4771 
state_mem.go:107] "Deleted CPUSet assignment" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="extract-content" Jan 29 09:54:26 crc kubenswrapper[4771]: E0129 09:54:26.201240 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="extract-utilities" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201247 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="extract-utilities" Jan 29 09:54:26 crc kubenswrapper[4771]: E0129 09:54:26.201258 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="registry-server" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201264 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="registry-server" Jan 29 09:54:26 crc kubenswrapper[4771]: E0129 09:54:26.201282 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="extract-content" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201290 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="extract-content" Jan 29 09:54:26 crc kubenswrapper[4771]: E0129 09:54:26.201312 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="extract-utilities" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201321 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="extract-utilities" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201542 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a26f0e76-1d80-4c56-966f-61c9bc46c85c" containerName="registry-server" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.201560 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1b17e6-0ea6-4124-9a4a-e076586623e1" containerName="registry-server" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.203050 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.211859 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnqbx"] Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.333277 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-catalog-content\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.333831 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zml6r\" (UniqueName: \"kubernetes.io/projected/ac206d13-c358-4b84-88c8-7f3b33447ee8-kube-api-access-zml6r\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.333885 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-utilities\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.436091 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-catalog-content\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.436190 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zml6r\" (UniqueName: \"kubernetes.io/projected/ac206d13-c358-4b84-88c8-7f3b33447ee8-kube-api-access-zml6r\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.436225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-utilities\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.436932 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-utilities\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.437039 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-catalog-content\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.460338 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zml6r\" (UniqueName: \"kubernetes.io/projected/ac206d13-c358-4b84-88c8-7f3b33447ee8-kube-api-access-zml6r\") pod \"redhat-marketplace-fnqbx\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:26 crc kubenswrapper[4771]: I0129 09:54:26.528101 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:27 crc kubenswrapper[4771]: I0129 09:54:27.031265 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnqbx"] Jan 29 09:54:27 crc kubenswrapper[4771]: W0129 09:54:27.039604 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac206d13_c358_4b84_88c8_7f3b33447ee8.slice/crio-d92a458f33071b8fdf1f2dd183457e05557a8615ddedf7ca49459a7a8d2e6bb5 WatchSource:0}: Error finding container d92a458f33071b8fdf1f2dd183457e05557a8615ddedf7ca49459a7a8d2e6bb5: Status 404 returned error can't find the container with id d92a458f33071b8fdf1f2dd183457e05557a8615ddedf7ca49459a7a8d2e6bb5 Jan 29 09:54:27 crc kubenswrapper[4771]: I0129 09:54:27.350815 4771 generic.go:334] "Generic (PLEG): container finished" podID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerID="308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e" exitCode=0 Jan 29 09:54:27 crc kubenswrapper[4771]: I0129 09:54:27.351325 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnqbx" event={"ID":"ac206d13-c358-4b84-88c8-7f3b33447ee8","Type":"ContainerDied","Data":"308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e"} Jan 29 09:54:27 crc kubenswrapper[4771]: I0129 09:54:27.351357 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnqbx" event={"ID":"ac206d13-c358-4b84-88c8-7f3b33447ee8","Type":"ContainerStarted","Data":"d92a458f33071b8fdf1f2dd183457e05557a8615ddedf7ca49459a7a8d2e6bb5"} Jan 29 09:54:28 crc kubenswrapper[4771]: I0129 09:54:28.369530 4771 generic.go:334] "Generic (PLEG): container finished" podID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerID="b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e" exitCode=0 Jan 29 09:54:28 crc kubenswrapper[4771]: I0129 09:54:28.369600 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnqbx" event={"ID":"ac206d13-c358-4b84-88c8-7f3b33447ee8","Type":"ContainerDied","Data":"b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e"} Jan 29 09:54:29 crc kubenswrapper[4771]: I0129 09:54:29.383036 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnqbx" event={"ID":"ac206d13-c358-4b84-88c8-7f3b33447ee8","Type":"ContainerStarted","Data":"090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59"} Jan 29 09:54:29 crc kubenswrapper[4771]: I0129 09:54:29.399147 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fnqbx" podStartSLOduration=1.954786254 podStartE2EDuration="3.399126781s" podCreationTimestamp="2026-01-29 09:54:26 +0000 UTC" firstStartedPulling="2026-01-29 09:54:27.352788192 +0000 UTC m=+2887.475628439" lastFinishedPulling="2026-01-29 09:54:28.797128749 +0000 UTC m=+2888.919968966" observedRunningTime="2026-01-29 09:54:29.39838229 +0000 UTC m=+2889.521222567" 
watchObservedRunningTime="2026-01-29 09:54:29.399126781 +0000 UTC m=+2889.521967008" Jan 29 09:54:36 crc kubenswrapper[4771]: I0129 09:54:36.528972 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:36 crc kubenswrapper[4771]: I0129 09:54:36.529764 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:36 crc kubenswrapper[4771]: I0129 09:54:36.583131 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:37 crc kubenswrapper[4771]: I0129 09:54:37.497842 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:37 crc kubenswrapper[4771]: I0129 09:54:37.540952 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnqbx"] Jan 29 09:54:39 crc kubenswrapper[4771]: I0129 09:54:39.468046 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fnqbx" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="registry-server" containerID="cri-o://090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59" gracePeriod=2 Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.024302 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.109016 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zml6r\" (UniqueName: \"kubernetes.io/projected/ac206d13-c358-4b84-88c8-7f3b33447ee8-kube-api-access-zml6r\") pod \"ac206d13-c358-4b84-88c8-7f3b33447ee8\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.109167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-catalog-content\") pod \"ac206d13-c358-4b84-88c8-7f3b33447ee8\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.109244 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-utilities\") pod \"ac206d13-c358-4b84-88c8-7f3b33447ee8\" (UID: \"ac206d13-c358-4b84-88c8-7f3b33447ee8\") " Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.110300 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-utilities" (OuterVolumeSpecName: "utilities") pod "ac206d13-c358-4b84-88c8-7f3b33447ee8" (UID: "ac206d13-c358-4b84-88c8-7f3b33447ee8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.116935 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac206d13-c358-4b84-88c8-7f3b33447ee8-kube-api-access-zml6r" (OuterVolumeSpecName: "kube-api-access-zml6r") pod "ac206d13-c358-4b84-88c8-7f3b33447ee8" (UID: "ac206d13-c358-4b84-88c8-7f3b33447ee8"). InnerVolumeSpecName "kube-api-access-zml6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.131106 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac206d13-c358-4b84-88c8-7f3b33447ee8" (UID: "ac206d13-c358-4b84-88c8-7f3b33447ee8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.211332 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zml6r\" (UniqueName: \"kubernetes.io/projected/ac206d13-c358-4b84-88c8-7f3b33447ee8-kube-api-access-zml6r\") on node \"crc\" DevicePath \"\"" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.211379 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.211392 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac206d13-c358-4b84-88c8-7f3b33447ee8-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.480237 4771 generic.go:334] "Generic (PLEG): container finished" podID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerID="090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59" exitCode=0 Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.480349 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnqbx" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.480350 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnqbx" event={"ID":"ac206d13-c358-4b84-88c8-7f3b33447ee8","Type":"ContainerDied","Data":"090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59"} Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.481760 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnqbx" event={"ID":"ac206d13-c358-4b84-88c8-7f3b33447ee8","Type":"ContainerDied","Data":"d92a458f33071b8fdf1f2dd183457e05557a8615ddedf7ca49459a7a8d2e6bb5"} Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.481787 4771 scope.go:117] "RemoveContainer" containerID="090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.502076 4771 scope.go:117] "RemoveContainer" containerID="b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.533751 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnqbx"] Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.547478 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnqbx"] Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.548395 4771 scope.go:117] "RemoveContainer" containerID="308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.589252 4771 scope.go:117] "RemoveContainer" containerID="090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59" Jan 29 09:54:40 crc kubenswrapper[4771]: E0129 09:54:40.590546 4771 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59\": container with ID starting with 090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59 not found: ID does not exist" containerID="090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.590616 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59"} err="failed to get container status \"090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59\": rpc error: code = NotFound desc = could not find container \"090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59\": container with ID starting with 090b438959ca7e49689f55e8af4ae2c1c1acaaffbda6669680b5d9beef7abd59 not found: ID does not exist" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.590654 4771 scope.go:117] "RemoveContainer" containerID="b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e" Jan 29 09:54:40 crc kubenswrapper[4771]: E0129 09:54:40.591595 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e\": container with ID starting with b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e not found: ID does not exist" containerID="b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.591640 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e"} err="failed to get container status \"b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e\": rpc error: code = NotFound desc = could not find container \"b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e\": container with ID starting with b256976a12f9466ad5d1b8bb7a4236c065c13b5f9becd3adc663f965ccc3b62e not found: ID does not exist" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.591685 4771 scope.go:117] "RemoveContainer" containerID="308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e" Jan 29 09:54:40 crc kubenswrapper[4771]: E0129 09:54:40.592409 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e\": container with ID starting with 308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e not found: ID does not exist" containerID="308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.592461 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e"} err="failed to get container status \"308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e\": rpc error: code = NotFound desc = could not find container \"308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e\": container with ID starting with 308edba0e667f153cb53a01f4d0c82882f4625de21e62fa26bfc7bbe4da1494e not found: ID does not exist" Jan 29 09:54:40 crc kubenswrapper[4771]: I0129 09:54:40.858091 4771 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" path="/var/lib/kubelet/pods/ac206d13-c358-4b84-88c8-7f3b33447ee8/volumes" Jan 29 09:55:14 crc kubenswrapper[4771]: I0129 09:55:14.271136 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:55:14 crc kubenswrapper[4771]: I0129 09:55:14.271731 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:55:44 crc kubenswrapper[4771]: I0129 09:55:44.271558 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:55:44 crc kubenswrapper[4771]: I0129 09:55:44.272111 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:56:14 crc kubenswrapper[4771]: I0129 09:56:14.272132 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:56:14 crc kubenswrapper[4771]: I0129 09:56:14.272644 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:56:14 crc kubenswrapper[4771]: I0129 09:56:14.272710 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 09:56:14 crc kubenswrapper[4771]: I0129 09:56:14.273405 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:56:14 crc kubenswrapper[4771]: I0129 09:56:14.273459 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" gracePeriod=600 Jan 29 09:56:14 crc kubenswrapper[4771]: E0129 09:56:14.431303 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:56:15 crc kubenswrapper[4771]: I0129 09:56:15.373942 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" exitCode=0 Jan 29 09:56:15 crc kubenswrapper[4771]: I0129 09:56:15.373984 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e"} Jan 29 09:56:15 crc kubenswrapper[4771]: I0129 09:56:15.374028 4771 scope.go:117] "RemoveContainer" containerID="2bb1893b4d187f4e0412bac35cff8b156d40bf788059603b5f775d27a03e618f" Jan 29 09:56:15 crc kubenswrapper[4771]: I0129 09:56:15.374520 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:56:15 crc kubenswrapper[4771]: E0129 09:56:15.374846 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:56:26 crc kubenswrapper[4771]: I0129 09:56:26.838399 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:56:26 crc kubenswrapper[4771]: E0129 09:56:26.839158 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:56:41 crc kubenswrapper[4771]: I0129 09:56:41.838933 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:56:41 crc kubenswrapper[4771]: E0129 09:56:41.840190 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:56:56 crc kubenswrapper[4771]: I0129 09:56:56.838183 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:56:56 crc kubenswrapper[4771]: E0129 09:56:56.838944 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
Jan 29 09:57:09 crc kubenswrapper[4771]: I0129 09:57:09.838306 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:57:09 crc kubenswrapper[4771]: E0129 09:57:09.840087 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:57:20 crc kubenswrapper[4771]: I0129 09:57:20.844414 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:57:20 crc kubenswrapper[4771]: E0129 09:57:20.860795 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:57:35 crc kubenswrapper[4771]: I0129 09:57:35.839330 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:57:35 crc kubenswrapper[4771]: E0129 09:57:35.840952 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.660860 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8rn5c"] Jan 29 09:57:38 crc kubenswrapper[4771]: E0129 09:57:38.661586 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="extract-utilities" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.661601 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="extract-utilities" Jan 29 09:57:38 crc kubenswrapper[4771]: E0129 09:57:38.661632 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="registry-server" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.661641 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="registry-server" Jan 29 09:57:38 crc kubenswrapper[4771]: E0129 09:57:38.661666 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="extract-content" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.661677 4771 state_mem.go:107] "Deleted
CPUSet assignment" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="extract-content" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.661932 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac206d13-c358-4b84-88c8-7f3b33447ee8" containerName="registry-server" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.663536 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.671654 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rn5c"] Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.804406 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-utilities\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.804721 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l425x\" (UniqueName: \"kubernetes.io/projected/2f6ec3d5-9665-4150-bb65-84e968f1c222-kube-api-access-l425x\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.804770 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-catalog-content\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.906480 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l425x\" (UniqueName: \"kubernetes.io/projected/2f6ec3d5-9665-4150-bb65-84e968f1c222-kube-api-access-l425x\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.906527 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-catalog-content\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.906608 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-utilities\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.907268 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-utilities\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.907461 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-catalog-content\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:38 crc kubenswrapper[4771]: I0129 09:57:38.928784 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l425x\" (UniqueName: \"kubernetes.io/projected/2f6ec3d5-9665-4150-bb65-84e968f1c222-kube-api-access-l425x\") pod \"certified-operators-8rn5c\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:39 crc kubenswrapper[4771]: I0129 09:57:39.001927 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:39 crc kubenswrapper[4771]: I0129 09:57:39.479488 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rn5c"] Jan 29 09:57:40 crc kubenswrapper[4771]: I0129 09:57:40.191381 4771 generic.go:334] "Generic (PLEG): container finished" podID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerID="ce65f5e7f770895c592d0e9e21f3c205de3904bb5905f035515ef5f494a57a9f" exitCode=0 Jan 29 09:57:40 crc kubenswrapper[4771]: I0129 09:57:40.191454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rn5c" event={"ID":"2f6ec3d5-9665-4150-bb65-84e968f1c222","Type":"ContainerDied","Data":"ce65f5e7f770895c592d0e9e21f3c205de3904bb5905f035515ef5f494a57a9f"} Jan 29 09:57:40 crc kubenswrapper[4771]: I0129 09:57:40.191796 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rn5c" event={"ID":"2f6ec3d5-9665-4150-bb65-84e968f1c222","Type":"ContainerStarted","Data":"c170112551c7750d0b43c952d0e21e93a3cbaad28df207599aafc74c12630bee"} Jan 29 09:57:40 crc kubenswrapper[4771]: I0129 09:57:40.194503 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:57:42 crc kubenswrapper[4771]: I0129 09:57:42.216993 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rn5c" event={"ID":"2f6ec3d5-9665-4150-bb65-84e968f1c222","Type":"ContainerStarted","Data":"3ec5601dd0a0678c6591a76ee906866fd892f70882f3af7b08e41d722890dacb"} Jan 29 09:57:45 crc kubenswrapper[4771]: I0129 09:57:45.252638 4771 generic.go:334] "Generic (PLEG): container finished" podID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerID="3ec5601dd0a0678c6591a76ee906866fd892f70882f3af7b08e41d722890dacb" exitCode=0 Jan 29 09:57:45 crc kubenswrapper[4771]: I0129 09:57:45.252740 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rn5c" event={"ID":"2f6ec3d5-9665-4150-bb65-84e968f1c222","Type":"ContainerDied","Data":"3ec5601dd0a0678c6591a76ee906866fd892f70882f3af7b08e41d722890dacb"} Jan 29 09:57:47 crc kubenswrapper[4771]: I0129 09:57:47.839434 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:57:47 crc kubenswrapper[4771]: E0129 09:57:47.840494 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:57:48 crc kubenswrapper[4771]: I0129 09:57:48.281490 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rn5c" event={"ID":"2f6ec3d5-9665-4150-bb65-84e968f1c222","Type":"ContainerStarted","Data":"326100a9f165d7368d26daeb587bcc2fe48947b1e2a9bee03e826ede5a7066f2"} Jan 29 09:57:48 crc kubenswrapper[4771]: I0129 09:57:48.318456 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8rn5c" podStartSLOduration=2.941912022 podStartE2EDuration="10.318436523s" podCreationTimestamp="2026-01-29 09:57:38 +0000 UTC" firstStartedPulling="2026-01-29 09:57:40.194119337 +0000 UTC m=+3080.316959604" lastFinishedPulling="2026-01-29 09:57:47.570643868 +0000 UTC m=+3087.693484105" observedRunningTime="2026-01-29 09:57:48.310162198 +0000 UTC m=+3088.433002425" watchObservedRunningTime="2026-01-29 09:57:48.318436523 +0000 UTC m=+3088.441276750" Jan 29 09:57:49 crc kubenswrapper[4771]: I0129 09:57:49.002978 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:49 crc kubenswrapper[4771]: I0129 09:57:49.003034 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:50 crc kubenswrapper[4771]: I0129 09:57:50.070804 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8rn5c" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="registry-server" probeResult="failure" output=< Jan 29 09:57:50 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 09:57:50 crc kubenswrapper[4771]: > Jan 29 09:57:59 crc kubenswrapper[4771]: I0129 09:57:59.054020 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:59 crc kubenswrapper[4771]: I0129 09:57:59.103465 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:57:59 crc kubenswrapper[4771]: I0129 09:57:59.290746 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rn5c"] Jan 29 09:58:00 crc kubenswrapper[4771]: I0129 09:58:00.389776 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8rn5c" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="registry-server" containerID="cri-o://326100a9f165d7368d26daeb587bcc2fe48947b1e2a9bee03e826ede5a7066f2" gracePeriod=2 Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.399141 4771 generic.go:334] "Generic (PLEG): container finished" podID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerID="326100a9f165d7368d26daeb587bcc2fe48947b1e2a9bee03e826ede5a7066f2" exitCode=0 Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.399212 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rn5c" event={"ID":"2f6ec3d5-9665-4150-bb65-84e968f1c222","Type":"ContainerDied","Data":"326100a9f165d7368d26daeb587bcc2fe48947b1e2a9bee03e826ede5a7066f2"} Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.399726 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rn5c" event={"ID":"2f6ec3d5-9665-4150-bb65-84e968f1c222","Type":"ContainerDied","Data":"c170112551c7750d0b43c952d0e21e93a3cbaad28df207599aafc74c12630bee"} Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.399746 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c170112551c7750d0b43c952d0e21e93a3cbaad28df207599aafc74c12630bee" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.462144 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.584968 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l425x\" (UniqueName: \"kubernetes.io/projected/2f6ec3d5-9665-4150-bb65-84e968f1c222-kube-api-access-l425x\") pod \"2f6ec3d5-9665-4150-bb65-84e968f1c222\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.585110 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-utilities\") pod \"2f6ec3d5-9665-4150-bb65-84e968f1c222\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.585252 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-catalog-content\") pod \"2f6ec3d5-9665-4150-bb65-84e968f1c222\" (UID: \"2f6ec3d5-9665-4150-bb65-84e968f1c222\") " Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.593821 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-utilities" (OuterVolumeSpecName: "utilities") pod "2f6ec3d5-9665-4150-bb65-84e968f1c222" (UID: "2f6ec3d5-9665-4150-bb65-84e968f1c222"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.598242 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6ec3d5-9665-4150-bb65-84e968f1c222-kube-api-access-l425x" (OuterVolumeSpecName: "kube-api-access-l425x") pod "2f6ec3d5-9665-4150-bb65-84e968f1c222" (UID: "2f6ec3d5-9665-4150-bb65-84e968f1c222"). InnerVolumeSpecName "kube-api-access-l425x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.637893 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f6ec3d5-9665-4150-bb65-84e968f1c222" (UID: "2f6ec3d5-9665-4150-bb65-84e968f1c222"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.687272 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.687319 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l425x\" (UniqueName: \"kubernetes.io/projected/2f6ec3d5-9665-4150-bb65-84e968f1c222-kube-api-access-l425x\") on node \"crc\" DevicePath \"\"" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.687342 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6ec3d5-9665-4150-bb65-84e968f1c222-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:58:01 crc kubenswrapper[4771]: I0129 09:58:01.838487 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:58:01 crc kubenswrapper[4771]: E0129 09:58:01.838772 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:58:02 crc kubenswrapper[4771]: I0129 09:58:02.406467 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rn5c" Jan 29 09:58:02 crc kubenswrapper[4771]: I0129 09:58:02.447901 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rn5c"] Jan 29 09:58:02 crc kubenswrapper[4771]: I0129 09:58:02.460399 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8rn5c"] Jan 29 09:58:02 crc kubenswrapper[4771]: I0129 09:58:02.847920 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" path="/var/lib/kubelet/pods/2f6ec3d5-9665-4150-bb65-84e968f1c222/volumes" Jan 29 09:58:13 crc kubenswrapper[4771]: I0129 09:58:13.839275 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:58:13 crc kubenswrapper[4771]: E0129 09:58:13.840380 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:58:24 crc kubenswrapper[4771]: I0129 09:58:24.838579 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:58:24 crc kubenswrapper[4771]: E0129 09:58:24.839312 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:58:36 crc kubenswrapper[4771]: I0129 09:58:36.838127 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:58:36 crc kubenswrapper[4771]: E0129 09:58:36.839604 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:58:47 crc kubenswrapper[4771]: I0129 09:58:47.838962 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:58:47 crc kubenswrapper[4771]: E0129 09:58:47.840990 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:58:58 crc kubenswrapper[4771]: I0129 09:58:58.838235 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:58:58 crc kubenswrapper[4771]: E0129 09:58:58.839240 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:59:11 crc kubenswrapper[4771]: I0129 09:59:11.838900 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:59:11 crc kubenswrapper[4771]: E0129 09:59:11.839793 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:59:24 crc kubenswrapper[4771]: I0129 09:59:24.838001 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:59:24 crc kubenswrapper[4771]: E0129 09:59:24.838758 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" 
podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:59:37 crc kubenswrapper[4771]: I0129 09:59:37.838925 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:59:37 crc kubenswrapper[4771]: E0129 09:59:37.839999 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 09:59:51 crc kubenswrapper[4771]: I0129 09:59:51.838953 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 09:59:51 crc kubenswrapper[4771]: E0129 09:59:51.840299 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.156009 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95"] Jan 29 10:00:00 crc kubenswrapper[4771]: E0129 10:00:00.156956 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="extract-utilities" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.156972 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="extract-utilities" Jan 29 10:00:00 crc kubenswrapper[4771]: E0129 10:00:00.156983 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="extract-content" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.156989 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="extract-content" Jan 29 10:00:00 crc kubenswrapper[4771]: E0129 10:00:00.157011 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="registry-server" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.157017 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="registry-server" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.157233 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6ec3d5-9665-4150-bb65-84e968f1c222" containerName="registry-server" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.157935 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.217314 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.217613 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.246877 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95"] Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.351382 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-config-volume\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.351475 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-secret-volume\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.351554 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd84j\" (UniqueName: \"kubernetes.io/projected/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-kube-api-access-qd84j\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.455283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-config-volume\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.455373 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-secret-volume\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.455446 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd84j\" (UniqueName: \"kubernetes.io/projected/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-kube-api-access-qd84j\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.456810 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-config-volume\") pod 
\"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.465767 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-secret-volume\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.473446 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd84j\" (UniqueName: \"kubernetes.io/projected/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-kube-api-access-qd84j\") pod \"collect-profiles-29494680-84q95\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.536848 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:00 crc kubenswrapper[4771]: I0129 10:00:00.968330 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95"] Jan 29 10:00:01 crc kubenswrapper[4771]: I0129 10:00:01.598572 4771 generic.go:334] "Generic (PLEG): container finished" podID="8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7" containerID="092130f08102acfef964c9ffd276c3c5fbf8322bc958d9e1e3e48956f5bef50f" exitCode=0 Jan 29 10:00:01 crc kubenswrapper[4771]: I0129 10:00:01.598634 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" event={"ID":"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7","Type":"ContainerDied","Data":"092130f08102acfef964c9ffd276c3c5fbf8322bc958d9e1e3e48956f5bef50f"} Jan 29 10:00:01 crc kubenswrapper[4771]: I0129 10:00:01.598891 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" event={"ID":"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7","Type":"ContainerStarted","Data":"d5c8ac09d41abfd132faa33c275178a73d067acc42d10c334dac04068549049c"} Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.032896 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.112074 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-config-volume\") pod \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.112292 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd84j\" (UniqueName: \"kubernetes.io/projected/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-kube-api-access-qd84j\") pod \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.112468 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-secret-volume\") pod \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\" (UID: \"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7\") " Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.113047 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7" (UID: "8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.113804 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.118837 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-kube-api-access-qd84j" (OuterVolumeSpecName: "kube-api-access-qd84j") pod "8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7" (UID: "8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7"). InnerVolumeSpecName "kube-api-access-qd84j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.120846 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7" (UID: "8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.216151 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd84j\" (UniqueName: \"kubernetes.io/projected/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-kube-api-access-qd84j\") on node \"crc\" DevicePath \"\"" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.216194 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.617435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" event={"ID":"8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7","Type":"ContainerDied","Data":"d5c8ac09d41abfd132faa33c275178a73d067acc42d10c334dac04068549049c"} Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.617752 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c8ac09d41abfd132faa33c275178a73d067acc42d10c334dac04068549049c" Jan 29 10:00:03 crc kubenswrapper[4771]: I0129 10:00:03.617637 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494680-84q95" Jan 29 10:00:04 crc kubenswrapper[4771]: I0129 10:00:04.111574 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h"] Jan 29 10:00:04 crc kubenswrapper[4771]: I0129 10:00:04.120960 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-vtc4h"] Jan 29 10:00:04 crc kubenswrapper[4771]: I0129 10:00:04.840563 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:00:04 crc kubenswrapper[4771]: E0129 10:00:04.841534 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:00:04 crc kubenswrapper[4771]: I0129 10:00:04.853307 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941e9ea1-4df6-4b0c-937b-af75139aeb0f" path="/var/lib/kubelet/pods/941e9ea1-4df6-4b0c-937b-af75139aeb0f/volumes" Jan 29 10:00:15 crc kubenswrapper[4771]: I0129 10:00:15.838754 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:00:15 crc kubenswrapper[4771]: E0129 10:00:15.840023 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:00:28 crc kubenswrapper[4771]: I0129 10:00:28.838089 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:00:28 
crc kubenswrapper[4771]: E0129 10:00:28.839100 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:00:43 crc kubenswrapper[4771]: I0129 10:00:43.838331 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:00:43 crc kubenswrapper[4771]: E0129 10:00:43.839071 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:00:45 crc kubenswrapper[4771]: I0129 10:00:45.998017 4771 scope.go:117] "RemoveContainer" containerID="ed76971758709e75637c869d09158601251e108a263b6e528702706bc6820b78" Jan 29 10:00:56 crc kubenswrapper[4771]: I0129 10:00:56.838483 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:00:56 crc kubenswrapper[4771]: E0129 10:00:56.839509 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.152768 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29494681-2s5hx"] Jan 29 10:01:00 crc kubenswrapper[4771]: E0129 10:01:00.153816 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7" containerName="collect-profiles" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.153833 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7" containerName="collect-profiles" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.154124 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3d455d-084a-4e3d-9b3e-02ac99cbf3d7" containerName="collect-profiles" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.154902 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.178593 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494681-2s5hx"] Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.228323 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rzj\" (UniqueName: \"kubernetes.io/projected/c0370c10-53ad-4d77-8869-f5c727a41d8c-kube-api-access-97rzj\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.228713 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-config-data\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.228784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-fernet-keys\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.228846 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-combined-ca-bundle\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.330627 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97rzj\" (UniqueName: \"kubernetes.io/projected/c0370c10-53ad-4d77-8869-f5c727a41d8c-kube-api-access-97rzj\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.330754 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-config-data\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.330839 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-fernet-keys\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.330901 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-combined-ca-bundle\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.338241 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-combined-ca-bundle\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.342272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-config-data\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.343123 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-fernet-keys\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.356822 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97rzj\" (UniqueName: \"kubernetes.io/projected/c0370c10-53ad-4d77-8869-f5c727a41d8c-kube-api-access-97rzj\") pod \"keystone-cron-29494681-2s5hx\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.490516 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:00 crc kubenswrapper[4771]: I0129 10:01:00.993669 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494681-2s5hx"] Jan 29 10:01:01 crc kubenswrapper[4771]: I0129 10:01:01.118849 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494681-2s5hx" event={"ID":"c0370c10-53ad-4d77-8869-f5c727a41d8c","Type":"ContainerStarted","Data":"89521021690b406fc2789f15cfa927ee540ca1641d59f9d0ac23c8b4215cf8e7"} Jan 29 10:01:02 crc kubenswrapper[4771]: I0129 10:01:02.127825 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494681-2s5hx" event={"ID":"c0370c10-53ad-4d77-8869-f5c727a41d8c","Type":"ContainerStarted","Data":"e8a77e46336f55af1672b0f19d62b8b3b30c507f9712f241b63969b6725a8916"} Jan 29 10:01:02 crc kubenswrapper[4771]: I0129 10:01:02.147753 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29494681-2s5hx" podStartSLOduration=2.14773857 podStartE2EDuration="2.14773857s" podCreationTimestamp="2026-01-29 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 10:01:02.144467911 +0000 UTC m=+3282.267308138" watchObservedRunningTime="2026-01-29 10:01:02.14773857 +0000 UTC m=+3282.270578797" Jan 29 10:01:04 crc kubenswrapper[4771]: I0129 10:01:04.148572 4771 generic.go:334] "Generic (PLEG): container finished" podID="c0370c10-53ad-4d77-8869-f5c727a41d8c" containerID="e8a77e46336f55af1672b0f19d62b8b3b30c507f9712f241b63969b6725a8916" exitCode=0 Jan 29 10:01:04 crc kubenswrapper[4771]: I0129 10:01:04.148618 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494681-2s5hx" event={"ID":"c0370c10-53ad-4d77-8869-f5c727a41d8c","Type":"ContainerDied","Data":"e8a77e46336f55af1672b0f19d62b8b3b30c507f9712f241b63969b6725a8916"} Jan 29 10:01:05 crc kubenswrapper[4771]: 
I0129 10:01:05.479982 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.643228 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-fernet-keys\") pod \"c0370c10-53ad-4d77-8869-f5c727a41d8c\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.643330 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97rzj\" (UniqueName: \"kubernetes.io/projected/c0370c10-53ad-4d77-8869-f5c727a41d8c-kube-api-access-97rzj\") pod \"c0370c10-53ad-4d77-8869-f5c727a41d8c\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.643516 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-config-data\") pod \"c0370c10-53ad-4d77-8869-f5c727a41d8c\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.643566 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-combined-ca-bundle\") pod \"c0370c10-53ad-4d77-8869-f5c727a41d8c\" (UID: \"c0370c10-53ad-4d77-8869-f5c727a41d8c\") " Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.649830 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0370c10-53ad-4d77-8869-f5c727a41d8c-kube-api-access-97rzj" (OuterVolumeSpecName: "kube-api-access-97rzj") pod "c0370c10-53ad-4d77-8869-f5c727a41d8c" (UID: "c0370c10-53ad-4d77-8869-f5c727a41d8c"). InnerVolumeSpecName "kube-api-access-97rzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.656944 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c0370c10-53ad-4d77-8869-f5c727a41d8c" (UID: "c0370c10-53ad-4d77-8869-f5c727a41d8c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.672047 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0370c10-53ad-4d77-8869-f5c727a41d8c" (UID: "c0370c10-53ad-4d77-8869-f5c727a41d8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.700575 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-config-data" (OuterVolumeSpecName: "config-data") pod "c0370c10-53ad-4d77-8869-f5c727a41d8c" (UID: "c0370c10-53ad-4d77-8869-f5c727a41d8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.746050 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.746284 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.746351 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0370c10-53ad-4d77-8869-f5c727a41d8c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 10:01:05 crc kubenswrapper[4771]: I0129 10:01:05.746418 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97rzj\" (UniqueName: \"kubernetes.io/projected/c0370c10-53ad-4d77-8869-f5c727a41d8c-kube-api-access-97rzj\") on node \"crc\" DevicePath \"\"" Jan 29 10:01:06 crc kubenswrapper[4771]: I0129 10:01:06.170327 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494681-2s5hx" event={"ID":"c0370c10-53ad-4d77-8869-f5c727a41d8c","Type":"ContainerDied","Data":"89521021690b406fc2789f15cfa927ee540ca1641d59f9d0ac23c8b4215cf8e7"} Jan 29 10:01:06 crc kubenswrapper[4771]: I0129 10:01:06.170389 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494681-2s5hx" Jan 29 10:01:06 crc kubenswrapper[4771]: I0129 10:01:06.170395 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89521021690b406fc2789f15cfa927ee540ca1641d59f9d0ac23c8b4215cf8e7" Jan 29 10:01:11 crc kubenswrapper[4771]: I0129 10:01:11.838604 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:01:11 crc kubenswrapper[4771]: E0129 10:01:11.840088 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:01:26 crc kubenswrapper[4771]: I0129 10:01:26.838789 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:01:27 crc kubenswrapper[4771]: I0129 10:01:27.438477 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"d5f3cde7556a6783366568c5944ce7a8a6bb28d47d7f31caf918af623e2be39a"} Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.360103 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-blzcd"] Jan 29 10:03:18 crc kubenswrapper[4771]: E0129 10:03:18.361182 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0370c10-53ad-4d77-8869-f5c727a41d8c" containerName="keystone-cron" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.361197 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c0370c10-53ad-4d77-8869-f5c727a41d8c" containerName="keystone-cron" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.361372 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0370c10-53ad-4d77-8869-f5c727a41d8c" containerName="keystone-cron" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.362733 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.371807 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blzcd"] Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.375377 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-catalog-content\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.375443 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-utilities\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.375724 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmkh\" (UniqueName: \"kubernetes.io/projected/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-kube-api-access-pfmkh\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.478679 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmkh\" (UniqueName: \"kubernetes.io/projected/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-kube-api-access-pfmkh\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.478843 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-catalog-content\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.478873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-utilities\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.479504 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-utilities\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.480222 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-catalog-content\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.502095 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmkh\" (UniqueName: \"kubernetes.io/projected/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-kube-api-access-pfmkh\") pod \"community-operators-blzcd\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:18 crc kubenswrapper[4771]: I0129 10:03:18.729875 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:19 crc kubenswrapper[4771]: I0129 10:03:19.180841 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blzcd"] Jan 29 10:03:19 crc kubenswrapper[4771]: I0129 10:03:19.648302 4771 generic.go:334] "Generic (PLEG): container finished" podID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerID="866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1" exitCode=0 Jan 29 10:03:19 crc kubenswrapper[4771]: I0129 10:03:19.648348 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blzcd" event={"ID":"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727","Type":"ContainerDied","Data":"866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1"} Jan 29 10:03:19 crc kubenswrapper[4771]: I0129 10:03:19.648374 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blzcd" event={"ID":"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727","Type":"ContainerStarted","Data":"3a52a34887bdac3d7bc11eee518d0b18607a7ab7e68c29ddb98c7eec1c337535"} Jan 29 10:03:19 crc kubenswrapper[4771]: I0129 10:03:19.651192 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 10:03:21 crc kubenswrapper[4771]: I0129 10:03:21.673169 4771 generic.go:334] "Generic (PLEG): container finished" podID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerID="9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef" exitCode=0 Jan 29 10:03:21 crc kubenswrapper[4771]: I0129 10:03:21.673277 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blzcd" event={"ID":"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727","Type":"ContainerDied","Data":"9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef"} Jan 29 10:03:22 crc kubenswrapper[4771]: I0129 10:03:22.684821 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blzcd" event={"ID":"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727","Type":"ContainerStarted","Data":"b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0"} Jan 29 10:03:22 crc kubenswrapper[4771]: I0129 10:03:22.703613 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-blzcd" podStartSLOduration=2.226432277 podStartE2EDuration="4.703592892s" podCreationTimestamp="2026-01-29 10:03:18 +0000 UTC" firstStartedPulling="2026-01-29 10:03:19.650915602 +0000 UTC m=+3419.773755829" lastFinishedPulling="2026-01-29 10:03:22.128076197 +0000 UTC m=+3422.250916444" observedRunningTime="2026-01-29 10:03:22.70166735 +0000 
UTC m=+3422.824507617" watchObservedRunningTime="2026-01-29 10:03:22.703592892 +0000 UTC m=+3422.826433119" Jan 29 10:03:28 crc kubenswrapper[4771]: I0129 10:03:28.730745 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:28 crc kubenswrapper[4771]: I0129 10:03:28.731072 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:29 crc kubenswrapper[4771]: I0129 10:03:29.788146 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-blzcd" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="registry-server" probeResult="failure" output=< Jan 29 10:03:29 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 10:03:29 crc kubenswrapper[4771]: > Jan 29 10:03:38 crc kubenswrapper[4771]: I0129 10:03:38.774851 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:38 crc kubenswrapper[4771]: I0129 10:03:38.823651 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:39 crc kubenswrapper[4771]: I0129 10:03:39.018058 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blzcd"] Jan 29 10:03:39 crc kubenswrapper[4771]: I0129 10:03:39.862791 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-blzcd" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="registry-server" containerID="cri-o://b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0" gracePeriod=2 Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.474075 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.576507 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-utilities\") pod \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.576588 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfmkh\" (UniqueName: \"kubernetes.io/projected/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-kube-api-access-pfmkh\") pod \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.576620 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-catalog-content\") pod \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\" (UID: \"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727\") " Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.577295 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-utilities" (OuterVolumeSpecName: "utilities") pod "7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" (UID: "7ceb2ae4-3d23-4fee-8bec-fd5f69c31727"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.589982 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-kube-api-access-pfmkh" (OuterVolumeSpecName: "kube-api-access-pfmkh") pod "7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" (UID: "7ceb2ae4-3d23-4fee-8bec-fd5f69c31727"). InnerVolumeSpecName "kube-api-access-pfmkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.669033 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" (UID: "7ceb2ae4-3d23-4fee-8bec-fd5f69c31727"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.678870 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.678905 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfmkh\" (UniqueName: \"kubernetes.io/projected/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-kube-api-access-pfmkh\") on node \"crc\" DevicePath \"\"" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.678917 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.873159 4771 generic.go:334] "Generic (PLEG): container finished" podID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerID="b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0" exitCode=0 Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.873213 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blzcd" event={"ID":"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727","Type":"ContainerDied","Data":"b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0"} Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.873247 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blzcd" event={"ID":"7ceb2ae4-3d23-4fee-8bec-fd5f69c31727","Type":"ContainerDied","Data":"3a52a34887bdac3d7bc11eee518d0b18607a7ab7e68c29ddb98c7eec1c337535"} Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.873268 4771 scope.go:117] "RemoveContainer" containerID="b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.873433 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-blzcd" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.900221 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blzcd"] Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.909181 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-blzcd"] Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.913299 4771 scope.go:117] "RemoveContainer" containerID="9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.934534 4771 scope.go:117] "RemoveContainer" containerID="866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.983538 4771 scope.go:117] "RemoveContainer" containerID="b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0" Jan 29 10:03:40 crc kubenswrapper[4771]: E0129 10:03:40.984154 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0\": container with ID starting with b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0 not found: ID does not exist" containerID="b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.984200 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0"} err="failed to get container status \"b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0\": rpc error: code = NotFound desc = could not find container \"b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0\": container with ID starting with b30cebda5409050ee304e572b63ba0961be0eae37eaa77003e8a2c32444fc0b0 not found: ID does not exist" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.984225 4771 scope.go:117] "RemoveContainer" containerID="9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef" Jan 29 10:03:40 crc kubenswrapper[4771]: E0129 10:03:40.984515 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef\": container with ID starting with 9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef not found: ID does not exist" containerID="9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.984550 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef"} err="failed to get container status \"9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef\": rpc error: code = NotFound desc = could not find container \"9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef\": container with ID starting with 9443a0904109e3eab030fc739aa3d3bae6448e3786a634cceb5b1bb5d152e0ef not found: ID does not exist" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.984572 4771 scope.go:117] "RemoveContainer" containerID="866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1" Jan 29 10:03:40 crc kubenswrapper[4771]: E0129 10:03:40.984821 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1\": container with ID starting with 866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1 not found: ID does not exist" containerID="866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1" Jan 29 10:03:40 crc kubenswrapper[4771]: I0129 10:03:40.984848 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1"} err="failed to get container status \"866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1\": rpc error: code = NotFound desc = could not find container \"866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1\": container with ID starting with 866f1ac4b053055c8ed66d2c77028de57ff562818de523d23fe5370b07b71fa1 not found: ID does not exist" Jan 29 10:03:42 crc kubenswrapper[4771]: I0129 10:03:42.862291 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" path="/var/lib/kubelet/pods/7ceb2ae4-3d23-4fee-8bec-fd5f69c31727/volumes" Jan 29 10:03:44 crc kubenswrapper[4771]: I0129 10:03:44.270870 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:03:44 crc kubenswrapper[4771]: I0129 10:03:44.270913 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:03:46 crc kubenswrapper[4771]: I0129 10:03:46.109972 4771 scope.go:117] "RemoveContainer" containerID="3ec5601dd0a0678c6591a76ee906866fd892f70882f3af7b08e41d722890dacb" Jan 29 10:03:46 crc kubenswrapper[4771]: I0129 10:03:46.141026 4771 scope.go:117] "RemoveContainer" containerID="ce65f5e7f770895c592d0e9e21f3c205de3904bb5905f035515ef5f494a57a9f" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.804965 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bkrc"] Jan 29 10:04:07 crc kubenswrapper[4771]: E0129 10:04:07.806819 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="registry-server" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.806848 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="registry-server" Jan 29 10:04:07 crc kubenswrapper[4771]: E0129 10:04:07.806920 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="extract-content" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.806934 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="extract-content" Jan 29 10:04:07 crc kubenswrapper[4771]: E0129 10:04:07.806978 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="extract-utilities" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.806992 4771 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="extract-utilities" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.807402 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ceb2ae4-3d23-4fee-8bec-fd5f69c31727" containerName="registry-server" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.809591 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.816742 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bkrc"] Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.949067 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-catalog-content\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.949135 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvxs\" (UniqueName: \"kubernetes.io/projected/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-kube-api-access-pmvxs\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:07 crc kubenswrapper[4771]: I0129 10:04:07.949170 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-utilities\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.052198 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-catalog-content\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.052335 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmvxs\" (UniqueName: \"kubernetes.io/projected/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-kube-api-access-pmvxs\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.052400 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-utilities\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.052969 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-catalog-content\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.053299 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-utilities\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.073497 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmvxs\" (UniqueName: \"kubernetes.io/projected/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-kube-api-access-pmvxs\") pod \"redhat-operators-4bkrc\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.139679 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:08 crc kubenswrapper[4771]: I0129 10:04:08.606347 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bkrc"] Jan 29 10:04:09 crc kubenswrapper[4771]: I0129 10:04:09.171184 4771 generic.go:334] "Generic (PLEG): container finished" podID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerID="6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8" exitCode=0 Jan 29 10:04:09 crc kubenswrapper[4771]: I0129 10:04:09.171839 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bkrc" event={"ID":"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a","Type":"ContainerDied","Data":"6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8"} Jan 29 10:04:09 crc kubenswrapper[4771]: I0129 10:04:09.172349 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bkrc" event={"ID":"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a","Type":"ContainerStarted","Data":"df03d13683f4c1ce9c7344f9631265618fa22025c5cc830be87c29031d181826"} Jan 29 10:04:11 crc kubenswrapper[4771]: I0129 10:04:11.198680 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bkrc" event={"ID":"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a","Type":"ContainerStarted","Data":"eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7"} Jan 29 10:04:12 crc kubenswrapper[4771]: I0129 10:04:12.211157 4771 generic.go:334] "Generic (PLEG): container finished" podID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerID="eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7" exitCode=0 Jan 29 10:04:12 crc kubenswrapper[4771]: I0129 10:04:12.211391 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bkrc" event={"ID":"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a","Type":"ContainerDied","Data":"eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7"} Jan 29 10:04:14 crc kubenswrapper[4771]: I0129 10:04:14.235919 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bkrc" event={"ID":"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a","Type":"ContainerStarted","Data":"4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f"} Jan 29 10:04:14 crc kubenswrapper[4771]: I0129 10:04:14.257942 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bkrc" podStartSLOduration=2.579669135 podStartE2EDuration="7.257925556s" podCreationTimestamp="2026-01-29 10:04:07 +0000 UTC" firstStartedPulling="2026-01-29 10:04:09.174174883 +0000 UTC m=+3469.297015110" 
lastFinishedPulling="2026-01-29 10:04:13.852431304 +0000 UTC m=+3473.975271531" observedRunningTime="2026-01-29 10:04:14.257897825 +0000 UTC m=+3474.380738052" watchObservedRunningTime="2026-01-29 10:04:14.257925556 +0000 UTC m=+3474.380765783" Jan 29 10:04:14 crc kubenswrapper[4771]: I0129 10:04:14.271277 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:04:14 crc kubenswrapper[4771]: I0129 10:04:14.271331 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:04:18 crc kubenswrapper[4771]: I0129 10:04:18.140955 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:18 crc kubenswrapper[4771]: I0129 10:04:18.142200 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:19 crc kubenswrapper[4771]: I0129 10:04:19.326434 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bkrc" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="registry-server" probeResult="failure" output=< Jan 29 10:04:19 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 29 10:04:19 crc kubenswrapper[4771]: > Jan 29 10:04:28 crc kubenswrapper[4771]: I0129 10:04:28.185647 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:28 crc kubenswrapper[4771]: I0129 10:04:28.235200 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:28 crc kubenswrapper[4771]: I0129 10:04:28.430706 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bkrc"] Jan 29 10:04:29 crc kubenswrapper[4771]: I0129 10:04:29.430371 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bkrc" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="registry-server" containerID="cri-o://4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f" gracePeriod=2 Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.038141 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.162410 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-catalog-content\") pod \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.162464 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmvxs\" (UniqueName: \"kubernetes.io/projected/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-kube-api-access-pmvxs\") pod \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.162600 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-utilities\") pod \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\" (UID: \"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a\") " Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.163971 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-utilities" (OuterVolumeSpecName: "utilities") pod "fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" (UID: "fca79b11-d0f1-4633-926f-e2d9d1e5ab5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.176442 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-kube-api-access-pmvxs" (OuterVolumeSpecName: "kube-api-access-pmvxs") pod "fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" (UID: "fca79b11-d0f1-4633-926f-e2d9d1e5ab5a"). InnerVolumeSpecName "kube-api-access-pmvxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.265247 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmvxs\" (UniqueName: \"kubernetes.io/projected/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-kube-api-access-pmvxs\") on node \"crc\" DevicePath \"\"" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.265287 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.329723 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" (UID: "fca79b11-d0f1-4633-926f-e2d9d1e5ab5a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.368362 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.448288 4771 generic.go:334] "Generic (PLEG): container finished" podID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerID="4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f" exitCode=0 Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.448366 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bkrc" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.448377 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bkrc" event={"ID":"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a","Type":"ContainerDied","Data":"4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f"} Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.448439 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bkrc" event={"ID":"fca79b11-d0f1-4633-926f-e2d9d1e5ab5a","Type":"ContainerDied","Data":"df03d13683f4c1ce9c7344f9631265618fa22025c5cc830be87c29031d181826"} Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.448475 4771 scope.go:117] "RemoveContainer" containerID="4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.490538 4771 scope.go:117] "RemoveContainer" containerID="eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.493832 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bkrc"] Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.505223 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bkrc"] Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.526563 4771 scope.go:117] "RemoveContainer" containerID="6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.589303 4771 scope.go:117] "RemoveContainer" containerID="4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f" Jan 29 10:04:30 crc kubenswrapper[4771]: E0129 10:04:30.589810 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f\": container with ID starting with 4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f not found: ID does not exist" containerID="4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.589865 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f"} err="failed to get container status \"4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f\": rpc error: code = NotFound desc = could not find container \"4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f\": container with ID starting with 4bc7808fa5c578a7be610883bed7fa74069738e56af0b9cf40519b243b01b57f not found: ID does not exist" Jan 29 10:04:30 crc 
kubenswrapper[4771]: I0129 10:04:30.589896 4771 scope.go:117] "RemoveContainer" containerID="eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7" Jan 29 10:04:30 crc kubenswrapper[4771]: E0129 10:04:30.590281 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7\": container with ID starting with eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7 not found: ID does not exist" containerID="eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.590321 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7"} err="failed to get container status \"eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7\": rpc error: code = NotFound desc = could not find container \"eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7\": container with ID starting with eb829569af2dfbaa019d61a49bfd959c897fe344fcdbb2fae3a8e3d1663cabc7 not found: ID does not exist" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.590347 4771 scope.go:117] "RemoveContainer" containerID="6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8" Jan 29 10:04:30 crc kubenswrapper[4771]: E0129 10:04:30.590625 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8\": container with ID starting with 6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8 not found: ID does not exist" containerID="6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.590662 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8"} err="failed to get container status \"6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8\": rpc error: code = NotFound desc = could not find container \"6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8\": container with ID starting with 6fc6eb55a1c4d59d9fcf16e6cee0905a8afc2ce70ec1768aae239b10ee8080e8 not found: ID does not exist" Jan 29 10:04:30 crc kubenswrapper[4771]: I0129 10:04:30.855799 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" path="/var/lib/kubelet/pods/fca79b11-d0f1-4633-926f-e2d9d1e5ab5a/volumes" Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.271787 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.272550 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.272972 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.274543 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5f3cde7556a6783366568c5944ce7a8a6bb28d47d7f31caf918af623e2be39a"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.274655 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://d5f3cde7556a6783366568c5944ce7a8a6bb28d47d7f31caf918af623e2be39a" gracePeriod=600 Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.592509 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="d5f3cde7556a6783366568c5944ce7a8a6bb28d47d7f31caf918af623e2be39a" exitCode=0 Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.592548 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"d5f3cde7556a6783366568c5944ce7a8a6bb28d47d7f31caf918af623e2be39a"} Jan 29 10:04:44 crc kubenswrapper[4771]: I0129 10:04:44.592968 4771 scope.go:117] "RemoveContainer" containerID="a36275c1309ef0d80f48e1ec632bc17d4d1d5968cfde95ffdb95ede2d2f11d8e" Jan 29 10:04:45 crc kubenswrapper[4771]: I0129 10:04:45.602587 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba"} Jan 29 10:04:46 crc kubenswrapper[4771]: I0129 10:04:46.214789 4771 scope.go:117] "RemoveContainer" containerID="326100a9f165d7368d26daeb587bcc2fe48947b1e2a9bee03e826ede5a7066f2" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.123507 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lgbdf"] Jan 29 10:04:55 crc kubenswrapper[4771]: E0129 10:04:55.125992 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="extract-utilities" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.126163 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="extract-utilities" Jan 29 10:04:55 crc kubenswrapper[4771]: E0129 10:04:55.126328 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="extract-content" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.126452 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="extract-content" Jan 29 10:04:55 crc kubenswrapper[4771]: E0129 10:04:55.126585 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="registry-server" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.126752 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" 
containerName="registry-server" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.127247 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca79b11-d0f1-4633-926f-e2d9d1e5ab5a" containerName="registry-server" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.131435 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.136395 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgbdf"] Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.182735 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxhf\" (UniqueName: \"kubernetes.io/projected/0c41a377-0f14-453d-9130-5d88e4912ae4-kube-api-access-mdxhf\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.182877 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-catalog-content\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.183048 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-utilities\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.285024 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-utilities\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.285097 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxhf\" (UniqueName: \"kubernetes.io/projected/0c41a377-0f14-453d-9130-5d88e4912ae4-kube-api-access-mdxhf\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.285168 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-catalog-content\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.285675 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-catalog-content\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.285898 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-utilities\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.309163 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxhf\" (UniqueName: \"kubernetes.io/projected/0c41a377-0f14-453d-9130-5d88e4912ae4-kube-api-access-mdxhf\") pod \"redhat-marketplace-lgbdf\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.460583 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:04:55 crc kubenswrapper[4771]: W0129 10:04:55.951631 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c41a377_0f14_453d_9130_5d88e4912ae4.slice/crio-82d5d2763537574ed043c7b370eb55117462a441c4e885c3b69c4a7d0d209580 WatchSource:0}: Error finding container 82d5d2763537574ed043c7b370eb55117462a441c4e885c3b69c4a7d0d209580: Status 404 returned error can't find the container with id 82d5d2763537574ed043c7b370eb55117462a441c4e885c3b69c4a7d0d209580 Jan 29 10:04:55 crc kubenswrapper[4771]: I0129 10:04:55.953410 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgbdf"] Jan 29 10:04:56 crc kubenswrapper[4771]: I0129 10:04:56.712211 4771 generic.go:334] "Generic (PLEG): container finished" podID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerID="2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c" exitCode=0 Jan 29 10:04:56 crc kubenswrapper[4771]: I0129 10:04:56.712522 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgbdf" event={"ID":"0c41a377-0f14-453d-9130-5d88e4912ae4","Type":"ContainerDied","Data":"2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c"} Jan 29 10:04:56 crc kubenswrapper[4771]: I0129 10:04:56.712546 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgbdf" event={"ID":"0c41a377-0f14-453d-9130-5d88e4912ae4","Type":"ContainerStarted","Data":"82d5d2763537574ed043c7b370eb55117462a441c4e885c3b69c4a7d0d209580"} Jan 29 10:04:57 crc kubenswrapper[4771]: I0129 10:04:57.724046 4771 generic.go:334] "Generic (PLEG): container finished" podID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerID="eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56" exitCode=0 Jan 29 10:04:57 crc kubenswrapper[4771]: I0129 10:04:57.724121 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgbdf" event={"ID":"0c41a377-0f14-453d-9130-5d88e4912ae4","Type":"ContainerDied","Data":"eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56"} Jan 29 10:04:58 crc kubenswrapper[4771]: I0129 10:04:58.733542 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgbdf" event={"ID":"0c41a377-0f14-453d-9130-5d88e4912ae4","Type":"ContainerStarted","Data":"e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb"} Jan 29 10:04:58 crc kubenswrapper[4771]: I0129 10:04:58.756901 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lgbdf" 
podStartSLOduration=2.15616722 podStartE2EDuration="3.756879243s" podCreationTimestamp="2026-01-29 10:04:55 +0000 UTC" firstStartedPulling="2026-01-29 10:04:56.714523756 +0000 UTC m=+3516.837363983" lastFinishedPulling="2026-01-29 10:04:58.315235769 +0000 UTC m=+3518.438076006" observedRunningTime="2026-01-29 10:04:58.753798199 +0000 UTC m=+3518.876638446" watchObservedRunningTime="2026-01-29 10:04:58.756879243 +0000 UTC m=+3518.879719470" Jan 29 10:05:05 crc kubenswrapper[4771]: I0129 10:05:05.461625 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:05:05 crc kubenswrapper[4771]: I0129 10:05:05.463074 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:05:05 crc kubenswrapper[4771]: I0129 10:05:05.511161 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:05:05 crc kubenswrapper[4771]: I0129 10:05:05.800132 4771 generic.go:334] "Generic (PLEG): container finished" podID="be095875-5658-44fb-9c4b-90d1bc093cf3" containerID="6256b01082efafe6d0b5400478058c6e17a8c1b78df5e409ecb00514b748569a" exitCode=0 Jan 29 10:05:05 crc kubenswrapper[4771]: I0129 10:05:05.800265 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"be095875-5658-44fb-9c4b-90d1bc093cf3","Type":"ContainerDied","Data":"6256b01082efafe6d0b5400478058c6e17a8c1b78df5e409ecb00514b748569a"} Jan 29 10:05:05 crc kubenswrapper[4771]: I0129 10:05:05.861171 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:05:05 crc kubenswrapper[4771]: I0129 10:05:05.934270 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgbdf"] Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.256477 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.432334 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-workdir\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.432426 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ca-certs\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.432459 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.432485 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config-secret\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.432599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.433419 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-temporary\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.433461 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-config-data\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.433495 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ssh-key\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.433513 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p92vm\" (UniqueName: \"kubernetes.io/projected/be095875-5658-44fb-9c4b-90d1bc093cf3-kube-api-access-p92vm\") pod \"be095875-5658-44fb-9c4b-90d1bc093cf3\" (UID: \"be095875-5658-44fb-9c4b-90d1bc093cf3\") " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.434387 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-config-data" (OuterVolumeSpecName: "config-data") pod 
"be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.434426 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.436110 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.438262 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be095875-5658-44fb-9c4b-90d1bc093cf3-kube-api-access-p92vm" (OuterVolumeSpecName: "kube-api-access-p92vm") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "kube-api-access-p92vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.440149 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.462545 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.463965 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.477376 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.492003 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "be095875-5658-44fb-9c4b-90d1bc093cf3" (UID: "be095875-5658-44fb-9c4b-90d1bc093cf3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535535 4771 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535596 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535609 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535619 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535630 4771 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535639 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be095875-5658-44fb-9c4b-90d1bc093cf3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535647 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be095875-5658-44fb-9c4b-90d1bc093cf3-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535655 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p92vm\" (UniqueName: \"kubernetes.io/projected/be095875-5658-44fb-9c4b-90d1bc093cf3-kube-api-access-p92vm\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.535664 4771 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/be095875-5658-44fb-9c4b-90d1bc093cf3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.555366 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.637666 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.835943 4771 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.835951 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"be095875-5658-44fb-9c4b-90d1bc093cf3","Type":"ContainerDied","Data":"d94549f5bb65584eaf5e3da534bf9b43c515de134aac8a3d50253fafe03bd83c"} Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.836559 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d94549f5bb65584eaf5e3da534bf9b43c515de134aac8a3d50253fafe03bd83c" Jan 29 10:05:07 crc kubenswrapper[4771]: I0129 10:05:07.836041 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lgbdf" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="registry-server" containerID="cri-o://e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb" gracePeriod=2 Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.358440 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.452294 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-catalog-content\") pod \"0c41a377-0f14-453d-9130-5d88e4912ae4\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.452864 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-utilities\") pod \"0c41a377-0f14-453d-9130-5d88e4912ae4\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.452970 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdxhf\" (UniqueName: \"kubernetes.io/projected/0c41a377-0f14-453d-9130-5d88e4912ae4-kube-api-access-mdxhf\") pod \"0c41a377-0f14-453d-9130-5d88e4912ae4\" (UID: \"0c41a377-0f14-453d-9130-5d88e4912ae4\") " Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.453947 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-utilities" (OuterVolumeSpecName: "utilities") pod "0c41a377-0f14-453d-9130-5d88e4912ae4" (UID: "0c41a377-0f14-453d-9130-5d88e4912ae4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.461037 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c41a377-0f14-453d-9130-5d88e4912ae4-kube-api-access-mdxhf" (OuterVolumeSpecName: "kube-api-access-mdxhf") pod "0c41a377-0f14-453d-9130-5d88e4912ae4" (UID: "0c41a377-0f14-453d-9130-5d88e4912ae4"). InnerVolumeSpecName "kube-api-access-mdxhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.556144 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdxhf\" (UniqueName: \"kubernetes.io/projected/0c41a377-0f14-453d-9130-5d88e4912ae4-kube-api-access-mdxhf\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.556176 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.573981 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c41a377-0f14-453d-9130-5d88e4912ae4" (UID: "0c41a377-0f14-453d-9130-5d88e4912ae4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.659027 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c41a377-0f14-453d-9130-5d88e4912ae4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.847639 4771 generic.go:334] "Generic (PLEG): container finished" podID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerID="e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb" exitCode=0 Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.847812 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgbdf" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.848475 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgbdf" event={"ID":"0c41a377-0f14-453d-9130-5d88e4912ae4","Type":"ContainerDied","Data":"e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb"} Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.848519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgbdf" event={"ID":"0c41a377-0f14-453d-9130-5d88e4912ae4","Type":"ContainerDied","Data":"82d5d2763537574ed043c7b370eb55117462a441c4e885c3b69c4a7d0d209580"} Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.848539 4771 scope.go:117] "RemoveContainer" containerID="e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.871454 4771 scope.go:117] "RemoveContainer" containerID="eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.894685 4771 scope.go:117] "RemoveContainer" containerID="2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.899623 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgbdf"] Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.912179 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgbdf"] Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.940229 4771 scope.go:117] "RemoveContainer" containerID="e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb" Jan 29 10:05:08 crc kubenswrapper[4771]: E0129 10:05:08.940717 4771 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb\": container with ID starting with e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb not found: ID does not exist" containerID="e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.940756 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb"} err="failed to get container status \"e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb\": rpc error: code = NotFound desc = could not find container \"e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb\": container with ID starting with e492d2cdec44ec9c031e41e45cdeaba041e8b85700fd05d2cf16839c51d2f7bb not found: ID does not exist" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.940786 4771 scope.go:117] "RemoveContainer" containerID="eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56" Jan 29 10:05:08 crc kubenswrapper[4771]: E0129 10:05:08.941033 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56\": container with ID starting with eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56 not found: ID does not exist" containerID="eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.941063 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56"} err="failed to get container status \"eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56\": rpc error: code = NotFound desc = could not find container \"eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56\": container with ID starting with eeaac3ef560497c7ac95de97c1c9d0727d5fb26ed76a8b51c34b5115a20dec56 not found: ID does not exist" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.941082 4771 scope.go:117] "RemoveContainer" containerID="2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c" Jan 29 10:05:08 crc kubenswrapper[4771]: E0129 10:05:08.941281 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c\": container with ID starting with 2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c not found: ID does not exist" containerID="2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c" Jan 29 10:05:08 crc kubenswrapper[4771]: I0129 10:05:08.941312 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c"} err="failed to get container status \"2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c\": rpc error: code = NotFound desc = could not find container \"2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c\": container with ID starting with 2a7b9d6d00fadbe37f109bbb482e3a123073ef39a82ade9fa78407a0c60e270c not found: ID does not exist" Jan 29 10:05:10 crc kubenswrapper[4771]: I0129 10:05:10.853778 4771 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" path="/var/lib/kubelet/pods/0c41a377-0f14-453d-9130-5d88e4912ae4/volumes" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.264758 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 10:05:18 crc kubenswrapper[4771]: E0129 10:05:18.265781 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be095875-5658-44fb-9c4b-90d1bc093cf3" containerName="tempest-tests-tempest-tests-runner" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.265796 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="be095875-5658-44fb-9c4b-90d1bc093cf3" containerName="tempest-tests-tempest-tests-runner" Jan 29 10:05:18 crc kubenswrapper[4771]: E0129 10:05:18.265840 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="extract-content" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.265848 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="extract-content" Jan 29 10:05:18 crc kubenswrapper[4771]: E0129 10:05:18.265864 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="registry-server" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.265876 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="registry-server" Jan 29 10:05:18 crc kubenswrapper[4771]: E0129 10:05:18.265898 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="extract-utilities" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.265906 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="extract-utilities" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.266145 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c41a377-0f14-453d-9130-5d88e4912ae4" containerName="registry-server" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.266166 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="be095875-5658-44fb-9c4b-90d1bc093cf3" containerName="tempest-tests-tempest-tests-runner" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.266916 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.270129 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-trsvj" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.275820 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.460997 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"71cc9425-62dc-4336-8f57-765e49ea1b7e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.461136 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtvvf\" (UniqueName: \"kubernetes.io/projected/71cc9425-62dc-4336-8f57-765e49ea1b7e-kube-api-access-mtvvf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"71cc9425-62dc-4336-8f57-765e49ea1b7e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.563770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"71cc9425-62dc-4336-8f57-765e49ea1b7e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.563887 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtvvf\" (UniqueName: \"kubernetes.io/projected/71cc9425-62dc-4336-8f57-765e49ea1b7e-kube-api-access-mtvvf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"71cc9425-62dc-4336-8f57-765e49ea1b7e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.565368 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"71cc9425-62dc-4336-8f57-765e49ea1b7e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.597671 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtvvf\" (UniqueName: \"kubernetes.io/projected/71cc9425-62dc-4336-8f57-765e49ea1b7e-kube-api-access-mtvvf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"71cc9425-62dc-4336-8f57-765e49ea1b7e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc kubenswrapper[4771]: I0129 10:05:18.624031 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"71cc9425-62dc-4336-8f57-765e49ea1b7e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:18 crc 
kubenswrapper[4771]: I0129 10:05:18.886118 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 10:05:19 crc kubenswrapper[4771]: I0129 10:05:19.491970 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 10:05:19 crc kubenswrapper[4771]: W0129 10:05:19.501076 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71cc9425_62dc_4336_8f57_765e49ea1b7e.slice/crio-91e925774392bd5754db30155d0504826c9e7a694e56010a92aea5965c3751a3 WatchSource:0}: Error finding container 91e925774392bd5754db30155d0504826c9e7a694e56010a92aea5965c3751a3: Status 404 returned error can't find the container with id 91e925774392bd5754db30155d0504826c9e7a694e56010a92aea5965c3751a3 Jan 29 10:05:19 crc kubenswrapper[4771]: I0129 10:05:19.972213 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"71cc9425-62dc-4336-8f57-765e49ea1b7e","Type":"ContainerStarted","Data":"91e925774392bd5754db30155d0504826c9e7a694e56010a92aea5965c3751a3"} Jan 29 10:05:20 crc kubenswrapper[4771]: I0129 10:05:20.981298 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"71cc9425-62dc-4336-8f57-765e49ea1b7e","Type":"ContainerStarted","Data":"bf44c2e0fcf8c7060d4fd1a1160ffb125195b6c14f291137ece71d7e48ac7ffb"} Jan 29 10:05:21 crc kubenswrapper[4771]: I0129 10:05:21.007495 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.123637494 podStartE2EDuration="3.007471177s" podCreationTimestamp="2026-01-29 10:05:18 +0000 UTC" firstStartedPulling="2026-01-29 10:05:19.503506454 +0000 UTC m=+3539.626346681" lastFinishedPulling="2026-01-29 10:05:20.387340127 +0000 UTC m=+3540.510180364" observedRunningTime="2026-01-29 10:05:20.999456829 +0000 UTC m=+3541.122297066" watchObservedRunningTime="2026-01-29 10:05:21.007471177 +0000 UTC m=+3541.130311424" Jan 29 10:05:41 crc kubenswrapper[4771]: I0129 10:05:41.951957 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z95ps/must-gather-97md4"] Jan 29 10:05:41 crc kubenswrapper[4771]: I0129 10:05:41.959156 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:41 crc kubenswrapper[4771]: I0129 10:05:41.964286 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z95ps"/"kube-root-ca.crt" Jan 29 10:05:41 crc kubenswrapper[4771]: I0129 10:05:41.964299 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z95ps"/"default-dockercfg-hs76b" Jan 29 10:05:41 crc kubenswrapper[4771]: I0129 10:05:41.964963 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z95ps"/"openshift-service-ca.crt" Jan 29 10:05:41 crc kubenswrapper[4771]: I0129 10:05:41.972238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z95ps/must-gather-97md4"] Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.086380 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-must-gather-output\") pod \"must-gather-97md4\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.086644 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flb9d\" (UniqueName: \"kubernetes.io/projected/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-kube-api-access-flb9d\") pod \"must-gather-97md4\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.188322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-must-gather-output\") pod \"must-gather-97md4\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.188423 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flb9d\" (UniqueName: \"kubernetes.io/projected/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-kube-api-access-flb9d\") pod \"must-gather-97md4\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.189340 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-must-gather-output\") pod \"must-gather-97md4\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.209524 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flb9d\" (UniqueName: \"kubernetes.io/projected/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-kube-api-access-flb9d\") pod \"must-gather-97md4\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.277223 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:05:42 crc kubenswrapper[4771]: I0129 10:05:42.774369 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z95ps/must-gather-97md4"] Jan 29 10:05:43 crc kubenswrapper[4771]: I0129 10:05:43.226175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/must-gather-97md4" event={"ID":"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a","Type":"ContainerStarted","Data":"667d4dba07e5ac1fe85672c2ca44007cd7af1edc6642d6b67dfbbbcb295ad60e"} Jan 29 10:05:49 crc kubenswrapper[4771]: I0129 10:05:49.294390 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/must-gather-97md4" event={"ID":"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a","Type":"ContainerStarted","Data":"bba82c8488b0995ab4fb4324f4c0e1d9afa391f2974d6db7bd62123af09589a9"} Jan 29 10:05:49 crc kubenswrapper[4771]: I0129 10:05:49.295109 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/must-gather-97md4" event={"ID":"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a","Type":"ContainerStarted","Data":"18066028862e3b00a9be60be2d766093af7d7e83ef4d5e7e72ebb4a66f510459"} Jan 29 10:05:49 crc kubenswrapper[4771]: I0129 10:05:49.331473 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z95ps/must-gather-97md4" podStartSLOduration=2.65532798 podStartE2EDuration="8.331445205s" podCreationTimestamp="2026-01-29 10:05:41 +0000 UTC" firstStartedPulling="2026-01-29 10:05:42.772263038 +0000 UTC m=+3562.895103255" lastFinishedPulling="2026-01-29 10:05:48.448380253 +0000 UTC m=+3568.571220480" observedRunningTime="2026-01-29 10:05:49.319946852 +0000 UTC m=+3569.442787119" watchObservedRunningTime="2026-01-29 10:05:49.331445205 +0000 UTC m=+3569.454285472" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.456868 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z95ps/crc-debug-8kjgw"] Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.458354 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.571105 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8lb\" (UniqueName: \"kubernetes.io/projected/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-kube-api-access-4p8lb\") pod \"crc-debug-8kjgw\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.571224 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-host\") pod \"crc-debug-8kjgw\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.672353 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-host\") pod \"crc-debug-8kjgw\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.672556 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-host\") pod \"crc-debug-8kjgw\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.672732 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8lb\" (UniqueName: \"kubernetes.io/projected/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-kube-api-access-4p8lb\") pod \"crc-debug-8kjgw\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.695666 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8lb\" (UniqueName: \"kubernetes.io/projected/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-kube-api-access-4p8lb\") pod \"crc-debug-8kjgw\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: I0129 10:05:52.774238 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:05:52 crc kubenswrapper[4771]: W0129 10:05:52.826873 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00e7688a_67ef_4352_8fc6_48bf5bbac5a2.slice/crio-01ec703390fdecbbe9889b1ae13ee523de58fa93d213fc4f327579a5d7e3a99c WatchSource:0}: Error finding container 01ec703390fdecbbe9889b1ae13ee523de58fa93d213fc4f327579a5d7e3a99c: Status 404 returned error can't find the container with id 01ec703390fdecbbe9889b1ae13ee523de58fa93d213fc4f327579a5d7e3a99c Jan 29 10:05:53 crc kubenswrapper[4771]: I0129 10:05:53.332823 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" event={"ID":"00e7688a-67ef-4352-8fc6-48bf5bbac5a2","Type":"ContainerStarted","Data":"01ec703390fdecbbe9889b1ae13ee523de58fa93d213fc4f327579a5d7e3a99c"} Jan 29 10:06:05 crc kubenswrapper[4771]: I0129 10:06:05.433332 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" event={"ID":"00e7688a-67ef-4352-8fc6-48bf5bbac5a2","Type":"ContainerStarted","Data":"cec6e8c3dcd8c29fbe760b56e3713616502490b83d7128bbe1b20822d7a399ef"} Jan 29 10:06:05 crc kubenswrapper[4771]: I0129 10:06:05.451795 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" podStartSLOduration=1.433471712 podStartE2EDuration="13.451773771s" podCreationTimestamp="2026-01-29 10:05:52 +0000 UTC" firstStartedPulling="2026-01-29 10:05:52.828895085 +0000 UTC m=+3572.951735312" lastFinishedPulling="2026-01-29 10:06:04.847197144 +0000 UTC m=+3584.970037371" observedRunningTime="2026-01-29 10:06:05.444813441 +0000 UTC m=+3585.567653678" watchObservedRunningTime="2026-01-29 10:06:05.451773771 +0000 UTC m=+3585.574613998" Jan 29 10:06:44 crc kubenswrapper[4771]: I0129 10:06:44.271037 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:06:44 crc kubenswrapper[4771]: I0129 10:06:44.271503 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:06:45 crc kubenswrapper[4771]: I0129 10:06:45.799888 4771 generic.go:334] "Generic (PLEG): container finished" podID="00e7688a-67ef-4352-8fc6-48bf5bbac5a2" containerID="cec6e8c3dcd8c29fbe760b56e3713616502490b83d7128bbe1b20822d7a399ef" exitCode=0 Jan 29 10:06:45 crc kubenswrapper[4771]: I0129 10:06:45.799930 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" event={"ID":"00e7688a-67ef-4352-8fc6-48bf5bbac5a2","Type":"ContainerDied","Data":"cec6e8c3dcd8c29fbe760b56e3713616502490b83d7128bbe1b20822d7a399ef"} Jan 29 10:06:46 crc kubenswrapper[4771]: I0129 10:06:46.939802 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:06:46 crc kubenswrapper[4771]: I0129 10:06:46.988087 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z95ps/crc-debug-8kjgw"] Jan 29 10:06:46 crc kubenswrapper[4771]: I0129 10:06:46.997962 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z95ps/crc-debug-8kjgw"] Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.104269 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p8lb\" (UniqueName: \"kubernetes.io/projected/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-kube-api-access-4p8lb\") pod \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.104850 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-host\") pod \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\" (UID: \"00e7688a-67ef-4352-8fc6-48bf5bbac5a2\") " Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.104938 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-host" (OuterVolumeSpecName: "host") pod "00e7688a-67ef-4352-8fc6-48bf5bbac5a2" (UID: "00e7688a-67ef-4352-8fc6-48bf5bbac5a2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.116596 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-kube-api-access-4p8lb" (OuterVolumeSpecName: "kube-api-access-4p8lb") pod "00e7688a-67ef-4352-8fc6-48bf5bbac5a2" (UID: "00e7688a-67ef-4352-8fc6-48bf5bbac5a2"). InnerVolumeSpecName "kube-api-access-4p8lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.132952 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-host\") on node \"crc\" DevicePath \"\"" Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.234982 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p8lb\" (UniqueName: \"kubernetes.io/projected/00e7688a-67ef-4352-8fc6-48bf5bbac5a2-kube-api-access-4p8lb\") on node \"crc\" DevicePath \"\"" Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.826819 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ec703390fdecbbe9889b1ae13ee523de58fa93d213fc4f327579a5d7e3a99c" Jan 29 10:06:47 crc kubenswrapper[4771]: I0129 10:06:47.827453 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-8kjgw" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.174740 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z95ps/crc-debug-5b5rt"] Jan 29 10:06:48 crc kubenswrapper[4771]: E0129 10:06:48.175207 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e7688a-67ef-4352-8fc6-48bf5bbac5a2" containerName="container-00" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.175224 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e7688a-67ef-4352-8fc6-48bf5bbac5a2" containerName="container-00" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.175457 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e7688a-67ef-4352-8fc6-48bf5bbac5a2" containerName="container-00" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.176278 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.256453 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535aad20-ec93-41bf-9447-4139c9ce21f6-host\") pod \"crc-debug-5b5rt\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.256499 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds68t\" (UniqueName: \"kubernetes.io/projected/535aad20-ec93-41bf-9447-4139c9ce21f6-kube-api-access-ds68t\") pod \"crc-debug-5b5rt\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.358766 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535aad20-ec93-41bf-9447-4139c9ce21f6-host\") pod \"crc-debug-5b5rt\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.358819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds68t\" (UniqueName: \"kubernetes.io/projected/535aad20-ec93-41bf-9447-4139c9ce21f6-kube-api-access-ds68t\") pod \"crc-debug-5b5rt\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.359019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535aad20-ec93-41bf-9447-4139c9ce21f6-host\") pod \"crc-debug-5b5rt\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.382351 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds68t\" (UniqueName: \"kubernetes.io/projected/535aad20-ec93-41bf-9447-4139c9ce21f6-kube-api-access-ds68t\") pod \"crc-debug-5b5rt\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.502884 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.837269 4771 generic.go:334] "Generic (PLEG): container finished" podID="535aad20-ec93-41bf-9447-4139c9ce21f6" containerID="894bbc890bc21e3e616ac7da169f2bac58ef795e261c39fec2fe458c38636ba8" exitCode=0 Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.854960 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e7688a-67ef-4352-8fc6-48bf5bbac5a2" path="/var/lib/kubelet/pods/00e7688a-67ef-4352-8fc6-48bf5bbac5a2/volumes" Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.857568 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/crc-debug-5b5rt" event={"ID":"535aad20-ec93-41bf-9447-4139c9ce21f6","Type":"ContainerDied","Data":"894bbc890bc21e3e616ac7da169f2bac58ef795e261c39fec2fe458c38636ba8"} Jan 29 10:06:48 crc kubenswrapper[4771]: I0129 10:06:48.857907 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/crc-debug-5b5rt" event={"ID":"535aad20-ec93-41bf-9447-4139c9ce21f6","Type":"ContainerStarted","Data":"41129bf21707efff8f9a5aad6cbcddc4ef7ea4569eaa17a57a61419dec7dc1c0"} Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.326016 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z95ps/crc-debug-5b5rt"] Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.332920 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z95ps/crc-debug-5b5rt"] Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.934589 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.985055 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535aad20-ec93-41bf-9447-4139c9ce21f6-host\") pod \"535aad20-ec93-41bf-9447-4139c9ce21f6\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.985176 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/535aad20-ec93-41bf-9447-4139c9ce21f6-host" (OuterVolumeSpecName: "host") pod "535aad20-ec93-41bf-9447-4139c9ce21f6" (UID: "535aad20-ec93-41bf-9447-4139c9ce21f6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.985487 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds68t\" (UniqueName: \"kubernetes.io/projected/535aad20-ec93-41bf-9447-4139c9ce21f6-kube-api-access-ds68t\") pod \"535aad20-ec93-41bf-9447-4139c9ce21f6\" (UID: \"535aad20-ec93-41bf-9447-4139c9ce21f6\") " Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.986263 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/535aad20-ec93-41bf-9447-4139c9ce21f6-host\") on node \"crc\" DevicePath \"\"" Jan 29 10:06:49 crc kubenswrapper[4771]: I0129 10:06:49.994243 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535aad20-ec93-41bf-9447-4139c9ce21f6-kube-api-access-ds68t" (OuterVolumeSpecName: "kube-api-access-ds68t") pod "535aad20-ec93-41bf-9447-4139c9ce21f6" (UID: "535aad20-ec93-41bf-9447-4139c9ce21f6"). InnerVolumeSpecName "kube-api-access-ds68t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.088482 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds68t\" (UniqueName: \"kubernetes.io/projected/535aad20-ec93-41bf-9447-4139c9ce21f6-kube-api-access-ds68t\") on node \"crc\" DevicePath \"\"" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.521013 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z95ps/crc-debug-zzvsj"] Jan 29 10:06:50 crc kubenswrapper[4771]: E0129 10:06:50.521771 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535aad20-ec93-41bf-9447-4139c9ce21f6" containerName="container-00" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.521793 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="535aad20-ec93-41bf-9447-4139c9ce21f6" containerName="container-00" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.522012 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="535aad20-ec93-41bf-9447-4139c9ce21f6" containerName="container-00" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.522750 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.597625 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9706a08-bb6e-478e-9ff5-2ae99eef9311-host\") pod \"crc-debug-zzvsj\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.597843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6k5\" (UniqueName: \"kubernetes.io/projected/c9706a08-bb6e-478e-9ff5-2ae99eef9311-kube-api-access-9b6k5\") pod \"crc-debug-zzvsj\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.700047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9706a08-bb6e-478e-9ff5-2ae99eef9311-host\") pod \"crc-debug-zzvsj\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.700212 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b6k5\" (UniqueName: \"kubernetes.io/projected/c9706a08-bb6e-478e-9ff5-2ae99eef9311-kube-api-access-9b6k5\") pod \"crc-debug-zzvsj\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.700239 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9706a08-bb6e-478e-9ff5-2ae99eef9311-host\") pod \"crc-debug-zzvsj\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.726560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b6k5\" (UniqueName: \"kubernetes.io/projected/c9706a08-bb6e-478e-9ff5-2ae99eef9311-kube-api-access-9b6k5\") pod \"crc-debug-zzvsj\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " 
pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.860474 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535aad20-ec93-41bf-9447-4139c9ce21f6" path="/var/lib/kubelet/pods/535aad20-ec93-41bf-9447-4139c9ce21f6/volumes" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.864307 4771 scope.go:117] "RemoveContainer" containerID="894bbc890bc21e3e616ac7da169f2bac58ef795e261c39fec2fe458c38636ba8" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.864490 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-5b5rt" Jan 29 10:06:50 crc kubenswrapper[4771]: I0129 10:06:50.867926 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:51 crc kubenswrapper[4771]: I0129 10:06:51.873129 4771 generic.go:334] "Generic (PLEG): container finished" podID="c9706a08-bb6e-478e-9ff5-2ae99eef9311" containerID="6d839ea9c9c9563e725176c482a7bdeb06e72cc7f02bdc6a51565dc70efb2b22" exitCode=0 Jan 29 10:06:51 crc kubenswrapper[4771]: I0129 10:06:51.873220 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/crc-debug-zzvsj" event={"ID":"c9706a08-bb6e-478e-9ff5-2ae99eef9311","Type":"ContainerDied","Data":"6d839ea9c9c9563e725176c482a7bdeb06e72cc7f02bdc6a51565dc70efb2b22"} Jan 29 10:06:51 crc kubenswrapper[4771]: I0129 10:06:51.873486 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/crc-debug-zzvsj" event={"ID":"c9706a08-bb6e-478e-9ff5-2ae99eef9311","Type":"ContainerStarted","Data":"8e7ffb69b3e835e98eb7595993669567393e4fc1b1bc587a7546fc95e8ff2b55"} Jan 29 10:06:51 crc kubenswrapper[4771]: I0129 10:06:51.911338 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z95ps/crc-debug-zzvsj"] Jan 29 10:06:51 crc kubenswrapper[4771]: I0129 10:06:51.919317 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z95ps/crc-debug-zzvsj"] Jan 29 10:06:52 crc kubenswrapper[4771]: I0129 10:06:52.991430 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.044511 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9706a08-bb6e-478e-9ff5-2ae99eef9311-host\") pod \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.044625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9706a08-bb6e-478e-9ff5-2ae99eef9311-host" (OuterVolumeSpecName: "host") pod "c9706a08-bb6e-478e-9ff5-2ae99eef9311" (UID: "c9706a08-bb6e-478e-9ff5-2ae99eef9311"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.044685 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b6k5\" (UniqueName: \"kubernetes.io/projected/c9706a08-bb6e-478e-9ff5-2ae99eef9311-kube-api-access-9b6k5\") pod \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\" (UID: \"c9706a08-bb6e-478e-9ff5-2ae99eef9311\") " Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.045174 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9706a08-bb6e-478e-9ff5-2ae99eef9311-host\") on node \"crc\" DevicePath \"\"" Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.050854 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9706a08-bb6e-478e-9ff5-2ae99eef9311-kube-api-access-9b6k5" (OuterVolumeSpecName: "kube-api-access-9b6k5") pod "c9706a08-bb6e-478e-9ff5-2ae99eef9311" (UID: "c9706a08-bb6e-478e-9ff5-2ae99eef9311"). InnerVolumeSpecName "kube-api-access-9b6k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.147382 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b6k5\" (UniqueName: \"kubernetes.io/projected/c9706a08-bb6e-478e-9ff5-2ae99eef9311-kube-api-access-9b6k5\") on node \"crc\" DevicePath \"\"" Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.894666 4771 scope.go:117] "RemoveContainer" containerID="6d839ea9c9c9563e725176c482a7bdeb06e72cc7f02bdc6a51565dc70efb2b22" Jan 29 10:06:53 crc kubenswrapper[4771]: I0129 10:06:53.894750 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/crc-debug-zzvsj" Jan 29 10:06:54 crc kubenswrapper[4771]: I0129 10:06:54.850662 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9706a08-bb6e-478e-9ff5-2ae99eef9311" path="/var/lib/kubelet/pods/c9706a08-bb6e-478e-9ff5-2ae99eef9311/volumes" Jan 29 10:07:06 crc kubenswrapper[4771]: I0129 10:07:06.974950 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7484874686-s4fjd_25d2516d-24c8-400e-acd4-d35b384046bb/barbican-api/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.134054 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7484874686-s4fjd_25d2516d-24c8-400e-acd4-d35b384046bb/barbican-api-log/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.264025 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c444788cd-vwtrs_49846e1c-6efb-4d4f-875c-ab051d11de09/barbican-keystone-listener/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.278465 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c444788cd-vwtrs_49846e1c-6efb-4d4f-875c-ab051d11de09/barbican-keystone-listener-log/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.454420 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5575df4f89-jlhn2_bef7ac33-b62c-4372-a3f2-98b951265ef3/barbican-worker/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.474973 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5575df4f89-jlhn2_bef7ac33-b62c-4372-a3f2-98b951265ef3/barbican-worker-log/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.644015 4771 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq_d2af364a-dc24-46dc-bd14-8ad420af1812/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.700083 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/ceilometer-central-agent/1.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.707203 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/ceilometer-central-agent/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.841090 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/proxy-httpd/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.855765 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/ceilometer-notification-agent/0.log" Jan 29 10:07:07 crc kubenswrapper[4771]: I0129 10:07:07.863851 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/sg-core/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.030190 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c483fbc3-2b55-4c4e-bb34-600f2fe18bd2/cinder-api-log/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.080339 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c483fbc3-2b55-4c4e-bb34-600f2fe18bd2/cinder-api/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.186526 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0c71ced2-21b6-42f9-bcf8-1d844b6402ab/cinder-scheduler/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.246106 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0c71ced2-21b6-42f9-bcf8-1d844b6402ab/probe/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.351415 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s_620aab50-8510-4a95-a53c-7dd7fac714b6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.437199 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs_880c583d-29a4-44e0-83b0-16795d5eac98/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.564481 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-578c4b6ff9-9qfgr_b23b9082-b814-455c-a31b-8df578081bf4/init/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.813332 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-578c4b6ff9-9qfgr_b23b9082-b814-455c-a31b-8df578081bf4/init/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.819294 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-578c4b6ff9-9qfgr_b23b9082-b814-455c-a31b-8df578081bf4/dnsmasq-dns/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.833951 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl_8c4903c9-2b1b-4e50-9e27-49c4aa41974c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:08 crc kubenswrapper[4771]: I0129 10:07:08.995532 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_afe8ac11-63a3-4c6f-b46b-a8a79ba8e027/glance-httpd/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.019839 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_afe8ac11-63a3-4c6f-b46b-a8a79ba8e027/glance-log/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.176473 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d8b13b11-fc7b-4228-b762-27c0ae94ae33/glance-httpd/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.232114 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d8b13b11-fc7b-4228-b762-27c0ae94ae33/glance-log/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.390010 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67d9579b5b-l9trm_3d093a30-424c-4a0c-a749-7a47328c4b2d/horizon/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.573606 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d_5fc93c8e-ca3e-403c-b42e-fea90628e728/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.690351 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zb8mb_72d4ad1e-f80c-43d6-a515-3f08c23df279/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.699641 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67d9579b5b-l9trm_3d093a30-424c-4a0c-a749-7a47328c4b2d/horizon-log/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.913086 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494681-2s5hx_c0370c10-53ad-4d77-8869-f5c727a41d8c/keystone-cron/0.log" Jan 29 10:07:09 crc kubenswrapper[4771]: I0129 10:07:09.948166 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67ddc4bf8b-n46xf_f202e04b-5581-45cc-9b76-da029ee47b31/keystone-api/0.log" Jan 29 10:07:10 crc kubenswrapper[4771]: I0129 10:07:10.274090 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cd7da89d-e16f-404a-b1dd-ccaaa3069431/kube-state-metrics/0.log" Jan 29 10:07:10 crc kubenswrapper[4771]: I0129 10:07:10.351632 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tw2th_8cb7a0bd-4a49-4b9c-ae51-86219526db00/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:10 crc kubenswrapper[4771]: I0129 10:07:10.677762 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78b5bf9d6f-2454r_814be124-8d28-4fa9-b792-9d6561d105f9/neutron-api/0.log" Jan 29 10:07:10 crc kubenswrapper[4771]: I0129 10:07:10.735056 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78b5bf9d6f-2454r_814be124-8d28-4fa9-b792-9d6561d105f9/neutron-httpd/0.log" Jan 29 10:07:10 crc kubenswrapper[4771]: I0129 10:07:10.906358 4771 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs_262d9611-9da4-4ea4-82ab-abbcaab91a0d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:11 crc kubenswrapper[4771]: I0129 10:07:11.382468 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2dddef1c-bcb1-48f7-816d-d24276dd7571/nova-cell0-conductor-conductor/0.log" Jan 29 10:07:11 crc kubenswrapper[4771]: I0129 10:07:11.498579 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_460724f7-49b9-475d-a983-5ffa6998815d/nova-api-log/0.log" Jan 29 10:07:11 crc kubenswrapper[4771]: I0129 10:07:11.613240 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_03fe6bc4-290b-46aa-b934-4d6849586b41/nova-cell1-conductor-conductor/0.log" Jan 29 10:07:11 crc kubenswrapper[4771]: I0129 10:07:11.644122 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_460724f7-49b9-475d-a983-5ffa6998815d/nova-api-api/0.log" Jan 29 10:07:11 crc kubenswrapper[4771]: I0129 10:07:11.751190 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_38681ed7-61ea-42cb-b2bf-8deba8d236bf/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 10:07:11 crc kubenswrapper[4771]: I0129 10:07:11.872047 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-8wnhx_54a83150-21fe-4085-ad4c-5eb77724684a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.039844 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_61ef1c72-f256-4e8a-ad21-b4cae84753e5/nova-metadata-log/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.338455 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ff06b9bb-31fc-437f-96fb-6ab586bb9918/mysql-bootstrap/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.349469 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d24a45bb-85f8-42c2-bee5-0b5407bdc52e/nova-scheduler-scheduler/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.523990 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ff06b9bb-31fc-437f-96fb-6ab586bb9918/mysql-bootstrap/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.622958 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ff06b9bb-31fc-437f-96fb-6ab586bb9918/galera/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.734950 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd70aa50-2651-4840-a551-44a608ccb08b/mysql-bootstrap/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.937559 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd70aa50-2651-4840-a551-44a608ccb08b/galera/0.log" Jan 29 10:07:12 crc kubenswrapper[4771]: I0129 10:07:12.965685 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd70aa50-2651-4840-a551-44a608ccb08b/mysql-bootstrap/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.123752 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0605e923-8ce6-4789-89f7-214d47422865/openstackclient/0.log" Jan 29 10:07:13 crc 
kubenswrapper[4771]: I0129 10:07:13.214256 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hqvn7_e1390576-f674-420d-93a7-2bee6d52f9f0/ovn-controller/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.374893 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_61ef1c72-f256-4e8a-ad21-b4cae84753e5/nova-metadata-metadata/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.386270 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q7x7j_3bde3888-b70c-434c-b553-da79ce5ff68d/openstack-network-exporter/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.581989 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovsdb-server-init/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.724123 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovs-vswitchd/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.738070 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovsdb-server/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.781464 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovsdb-server-init/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.934005 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-99hxx_b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:13 crc kubenswrapper[4771]: I0129 10:07:13.977073 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3656052-f3d0-4665-9fc7-8236cede743b/openstack-network-exporter/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.069385 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3656052-f3d0-4665-9fc7-8236cede743b/ovn-northd/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.196616 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_34887d57-0fb9-4617-b9d0-1338663bd16b/openstack-network-exporter/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.245153 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_34887d57-0fb9-4617-b9d0-1338663bd16b/ovsdbserver-nb/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.279382 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.280312 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.327595 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_d2c641f1-e0cc-4892-8e36-9a70ee2bacc9/openstack-network-exporter/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.460454 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d2c641f1-e0cc-4892-8e36-9a70ee2bacc9/ovsdbserver-sb/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.634297 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b66bb6fb-89w2j_93c318ae-5098-47e2-a09d-e67fe5124ed5/placement-api/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.654254 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b66bb6fb-89w2j_93c318ae-5098-47e2-a09d-e67fe5124ed5/placement-log/0.log" Jan 29 10:07:14 crc kubenswrapper[4771]: I0129 10:07:14.676764 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23244fea-bb17-4ba0-b353-d4f98af3d93d/setup-container/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.014505 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23244fea-bb17-4ba0-b353-d4f98af3d93d/setup-container/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.019199 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23244fea-bb17-4ba0-b353-d4f98af3d93d/rabbitmq/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.089431 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_222f1966-eb07-4bcb-986d-70287a36fc90/setup-container/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.308221 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_222f1966-eb07-4bcb-986d-70287a36fc90/setup-container/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.360584 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_222f1966-eb07-4bcb-986d-70287a36fc90/rabbitmq/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.376025 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs_665612c6-6f64-4e6e-a9d7-770665c7abff/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.573563 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4hnbg_1861fab3-9c27-41e5-b792-6df4cb346a1d/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.625432 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6_11b71af0-5437-4b20-a2a0-68897b1f8c78/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.802297 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dzfr4_183736e8-0ae6-459f-9dbc-1b5a9d60539d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:15 crc kubenswrapper[4771]: I0129 10:07:15.823882 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mb82t_b4199724-f14d-423d-82f1-8a1438e624fb/ssh-known-hosts-edpm-deployment/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.080735 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-55bc4c6647-vmgxk_70ca5b45-1804-4830-8ede-b28279d8d4ce/proxy-server/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.144774 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55bc4c6647-vmgxk_70ca5b45-1804-4830-8ede-b28279d8d4ce/proxy-httpd/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.249900 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-snzfb_543a7e6c-ab47-4720-b5f0-6b0800904d36/swift-ring-rebalance/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.291099 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-auditor/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.349882 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-reaper/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.476250 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-auditor/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.544669 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-replicator/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.547369 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-server/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.617201 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-replicator/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.702404 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-server/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.763338 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-auditor/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.814416 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-updater/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.829121 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-expirer/0.log" Jan 29 10:07:16 crc kubenswrapper[4771]: I0129 10:07:16.923873 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-replicator/0.log" Jan 29 10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.027404 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-updater/0.log" Jan 29 10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.029733 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/rsync/0.log" Jan 29 10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.034772 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-server/0.log" Jan 29 
10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.142857 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/swift-recon-cron/0.log" Jan 29 10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.315828 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6_10f62904-f030-4614-accd-3c95e39c2c6a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.440991 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_be095875-5658-44fb-9c4b-90d1bc093cf3/tempest-tests-tempest-tests-runner/0.log" Jan 29 10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.487014 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_71cc9425-62dc-4336-8f57-765e49ea1b7e/test-operator-logs-container/0.log" Jan 29 10:07:17 crc kubenswrapper[4771]: I0129 10:07:17.714416 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bssfw_8c8a65a0-1d3a-413d-964f-71d69bb1c3b7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:07:26 crc kubenswrapper[4771]: I0129 10:07:26.960519 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_71cb4a34-0373-453e-b103-3e6e0a00ff0c/memcached/0.log" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.697005 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jfsd"] Jan 29 10:07:42 crc kubenswrapper[4771]: E0129 10:07:42.698084 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9706a08-bb6e-478e-9ff5-2ae99eef9311" containerName="container-00" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.698102 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9706a08-bb6e-478e-9ff5-2ae99eef9311" containerName="container-00" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.698390 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9706a08-bb6e-478e-9ff5-2ae99eef9311" containerName="container-00" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.700894 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.721238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jfsd"] Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.858908 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ckrl\" (UniqueName: \"kubernetes.io/projected/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-kube-api-access-9ckrl\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.859220 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-catalog-content\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.859358 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-utilities\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.960643 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-utilities\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.960969 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ckrl\" (UniqueName: \"kubernetes.io/projected/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-kube-api-access-9ckrl\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.961098 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-catalog-content\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.961300 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-utilities\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.961503 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-catalog-content\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:42 crc kubenswrapper[4771]: I0129 10:07:42.986808 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9ckrl\" (UniqueName: \"kubernetes.io/projected/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-kube-api-access-9ckrl\") pod \"certified-operators-8jfsd\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:43 crc kubenswrapper[4771]: I0129 10:07:43.022065 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:43 crc kubenswrapper[4771]: I0129 10:07:43.552443 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jfsd"] Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.271332 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.271380 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.271416 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.272163 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.272225 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" gracePeriod=600 Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.414529 4771 generic.go:334] "Generic (PLEG): container finished" podID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerID="cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff" exitCode=0 Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.414592 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfsd" event={"ID":"dbd739f7-3d9e-472a-be41-88d8d5fd6b33","Type":"ContainerDied","Data":"cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff"} Jan 29 10:07:44 crc kubenswrapper[4771]: I0129 10:07:44.414631 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfsd" event={"ID":"dbd739f7-3d9e-472a-be41-88d8d5fd6b33","Type":"ContainerStarted","Data":"7f3c8dd4afb769422271df5c05a680eab8f123a8c98a8f6e288cbbe1f068e092"} Jan 29 10:07:44 crc kubenswrapper[4771]: E0129 10:07:44.916744 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.195254 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/util/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.370766 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/pull/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.408470 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/pull/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.424635 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" exitCode=0 Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.424668 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba"} Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.424793 4771 scope.go:117] "RemoveContainer" containerID="d5f3cde7556a6783366568c5944ce7a8a6bb28d47d7f31caf918af623e2be39a" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.425424 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:07:45 crc kubenswrapper[4771]: E0129 10:07:45.425799 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.426763 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfsd" event={"ID":"dbd739f7-3d9e-472a-be41-88d8d5fd6b33","Type":"ContainerStarted","Data":"380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1"} Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.428498 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/util/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.620169 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/extract/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.620515 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/util/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.646609 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/pull/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.857629 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-w5l7q_6221aa48-bc7d-4a2f-9897-41dae47815e7/manager/0.log" Jan 29 10:07:45 crc kubenswrapper[4771]: I0129 10:07:45.878330 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-t96kk_742db07e-b8fa-472a-824c-ce57c4e3bca5/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.039992 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-kzwjr_b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.215681 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-59xvx_29710697-a286-413b-a7ce-01631b4cc6de/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.229470 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-2cq8m_e0d87d52-0e91-4f3f-bbf2-228b57bbcff7/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.325252 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-xczpv_7f152534-d323-4bdc-9d0e-86e673b65a56/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.548277 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-rjn7t_f9b6f2b9-26dd-44f5-859d-f9a1828d726d/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.741899 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-ltjpd_be5b01ce-6d7f-40e9-9e6e-3291fab1d242/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.811636 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-wfjjb_8373cf12-3567-409b-ae85-1f530e91c86a/manager/0.log" Jan 29 10:07:46 crc kubenswrapper[4771]: I0129 10:07:46.925441 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-c298s_da6fb9cf-4fe9-41a8-a645-0d98a36e9472/manager/0.log" Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.166256 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-qpqh8_013a2529-271b-4c1d-8ac4-3b443a9d1069/manager/0.log" Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.182576 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-j2vm7_a9c9f6d2-b488-4184-b7dd-46e228737c64/manager/0.log" Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.322635 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-j8578_bbfc6317-5079-4f9f-83e0-9f93970a0710/manager/0.log" Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.422631 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-q74t2_9ae26fe7-fcd5-4006-aa5d-133b8b91e521/manager/0.log" Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.452137 4771 generic.go:334] "Generic (PLEG): container finished" podID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerID="380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1" exitCode=0 Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.452177 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfsd" event={"ID":"dbd739f7-3d9e-472a-be41-88d8d5fd6b33","Type":"ContainerDied","Data":"380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1"} Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.534243 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k_f5c86f6b-dae6-4551-9413-df4e429c0ffa/manager/0.log" Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.719215 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5f8b9f866c-gltlv_ca41b7ae-086c-41c0-b397-3239655e4d1d/operator/0.log" Jan 29 10:07:47 crc kubenswrapper[4771]: I0129 10:07:47.918265 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xs5n2_4caf16d8-dac6-4281-96a5-97e82d2a828f/registry-server/0.log" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.225886 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-w9c7v_c594b46c-4d8f-4604-a70d-91544ff13805/manager/0.log" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.236159 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-mkl7v_488821bb-04ee-4c62-b4a3-ac83d74a8919/manager/0.log" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.463281 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfsd" event={"ID":"dbd739f7-3d9e-472a-be41-88d8d5fd6b33","Type":"ContainerStarted","Data":"3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f"} Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.492342 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jfsd" podStartSLOduration=3.007940407 podStartE2EDuration="6.492317321s" podCreationTimestamp="2026-01-29 10:07:42 +0000 UTC" firstStartedPulling="2026-01-29 10:07:44.417577707 +0000 UTC m=+3684.540417984" lastFinishedPulling="2026-01-29 10:07:47.901954671 +0000 UTC m=+3688.024794898" observedRunningTime="2026-01-29 10:07:48.482986797 +0000 UTC m=+3688.605827024" watchObservedRunningTime="2026-01-29 10:07:48.492317321 +0000 UTC m=+3688.615157548" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.524386 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9gz6m_40f4ff01-59ff-4cb1-a683-6e1da9756691/operator/0.log" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.766842 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-p7tj6_688a9f5e-ab0d-4975-a033-a8cdf403fd9e/manager/0.log" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.880916 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-46m5f_5c0452ac-093e-45b1-825f-3ba01ed93425/manager/0.log" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.881092 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cf4dc6f96-vhpg8_72d8430d-b468-4e7b-a568-bb12c9a4c856/manager/0.log" Jan 29 10:07:48 crc kubenswrapper[4771]: I0129 10:07:48.970432 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pkgdd_c7e92467-0347-42d4-9628-639368c69b80/manager/0.log" Jan 29 10:07:49 crc kubenswrapper[4771]: I0129 10:07:49.096535 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-66qbk_5048415a-36d8-47a9-aed1-f7395e309ce3/manager/0.log" Jan 29 10:07:53 crc kubenswrapper[4771]: I0129 10:07:53.022678 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:53 crc kubenswrapper[4771]: I0129 10:07:53.022944 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:53 crc kubenswrapper[4771]: I0129 10:07:53.078269 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:53 crc kubenswrapper[4771]: I0129 10:07:53.544825 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:53 crc kubenswrapper[4771]: I0129 10:07:53.591762 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jfsd"] Jan 29 10:07:55 crc kubenswrapper[4771]: I0129 10:07:55.513221 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jfsd" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="registry-server" containerID="cri-o://3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f" gracePeriod=2 Jan 29 10:07:55 crc kubenswrapper[4771]: I0129 10:07:55.993250 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.112689 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ckrl\" (UniqueName: \"kubernetes.io/projected/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-kube-api-access-9ckrl\") pod \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.112870 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-catalog-content\") pod \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.113110 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-utilities\") pod \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\" (UID: \"dbd739f7-3d9e-472a-be41-88d8d5fd6b33\") " Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.113970 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-utilities" (OuterVolumeSpecName: "utilities") pod "dbd739f7-3d9e-472a-be41-88d8d5fd6b33" (UID: "dbd739f7-3d9e-472a-be41-88d8d5fd6b33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.119024 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-kube-api-access-9ckrl" (OuterVolumeSpecName: "kube-api-access-9ckrl") pod "dbd739f7-3d9e-472a-be41-88d8d5fd6b33" (UID: "dbd739f7-3d9e-472a-be41-88d8d5fd6b33"). InnerVolumeSpecName "kube-api-access-9ckrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.160231 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbd739f7-3d9e-472a-be41-88d8d5fd6b33" (UID: "dbd739f7-3d9e-472a-be41-88d8d5fd6b33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.215163 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.215201 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.215213 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ckrl\" (UniqueName: \"kubernetes.io/projected/dbd739f7-3d9e-472a-be41-88d8d5fd6b33-kube-api-access-9ckrl\") on node \"crc\" DevicePath \"\"" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.522357 4771 generic.go:334] "Generic (PLEG): container finished" podID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerID="3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f" exitCode=0 Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.522424 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfsd" event={"ID":"dbd739f7-3d9e-472a-be41-88d8d5fd6b33","Type":"ContainerDied","Data":"3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f"} Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.522465 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfsd" event={"ID":"dbd739f7-3d9e-472a-be41-88d8d5fd6b33","Type":"ContainerDied","Data":"7f3c8dd4afb769422271df5c05a680eab8f123a8c98a8f6e288cbbe1f068e092"} Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.522470 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jfsd" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.522486 4771 scope.go:117] "RemoveContainer" containerID="3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.539295 4771 scope.go:117] "RemoveContainer" containerID="380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.558840 4771 scope.go:117] "RemoveContainer" containerID="cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.575437 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jfsd"] Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.586464 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jfsd"] Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.614624 4771 scope.go:117] "RemoveContainer" containerID="3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f" Jan 29 10:07:56 crc kubenswrapper[4771]: E0129 10:07:56.615037 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f\": container with ID starting with 3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f not found: ID does not exist" containerID="3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.615069 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f"} err="failed to get container status \"3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f\": rpc error: code = NotFound desc = could not find container \"3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f\": container with ID starting with 3e23c08a9f3f8f6f389b048fab319d594788a447cf722c27060aed80ee9c3a4f not found: ID does not exist" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.615095 4771 scope.go:117] "RemoveContainer" containerID="380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1" Jan 29 10:07:56 crc kubenswrapper[4771]: E0129 10:07:56.615315 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1\": container with ID starting with 380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1 not found: ID does not exist" containerID="380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.615370 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1"} err="failed to get container status \"380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1\": rpc error: code = NotFound desc = could not find container \"380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1\": container with ID starting with 380abdfa42a07c2a3371a9f11dc19178a0b053d3be8695dc12bde71b156ee9d1 not found: ID does not exist" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.615392 4771 scope.go:117] "RemoveContainer" 
containerID="cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff" Jan 29 10:07:56 crc kubenswrapper[4771]: E0129 10:07:56.615608 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff\": container with ID starting with cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff not found: ID does not exist" containerID="cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.615644 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff"} err="failed to get container status \"cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff\": rpc error: code = NotFound desc = could not find container \"cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff\": container with ID starting with cbae9ae3711c8f6db5f1c4cbf6622e4b18e09f9ca1f5fe3ed6c8c598dba4c2ff not found: ID does not exist" Jan 29 10:07:56 crc kubenswrapper[4771]: I0129 10:07:56.856153 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" path="/var/lib/kubelet/pods/dbd739f7-3d9e-472a-be41-88d8d5fd6b33/volumes" Jan 29 10:07:57 crc kubenswrapper[4771]: I0129 10:07:57.839034 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:07:57 crc kubenswrapper[4771]: E0129 10:07:57.839370 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:08:07 crc kubenswrapper[4771]: I0129 10:08:07.726250 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qvr68_34f0263f-c771-4ef0-91be-9d37f9ba6d60/control-plane-machine-set-operator/0.log" Jan 29 10:08:07 crc kubenswrapper[4771]: I0129 10:08:07.884875 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rsp6c_acd89578-60c3-4368-9b2c-59dc899d1a08/kube-rbac-proxy/0.log" Jan 29 10:08:07 crc kubenswrapper[4771]: I0129 10:08:07.919417 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rsp6c_acd89578-60c3-4368-9b2c-59dc899d1a08/machine-api-operator/0.log" Jan 29 10:08:09 crc kubenswrapper[4771]: I0129 10:08:09.838426 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:08:09 crc kubenswrapper[4771]: E0129 10:08:09.838904 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:08:21 crc kubenswrapper[4771]: 
I0129 10:08:21.238379 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dds85_0eb51574-328d-4156-aa8d-50355bb9d9c2/cert-manager-controller/0.log" Jan 29 10:08:21 crc kubenswrapper[4771]: I0129 10:08:21.361163 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7r6xt_b8b4db5a-9eaa-4640-8031-185eede7de9b/cert-manager-cainjector/0.log" Jan 29 10:08:21 crc kubenswrapper[4771]: I0129 10:08:21.463499 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-knhgs_fd41faef-aa84-4754-8dc1-36aeafc1e4c3/cert-manager-webhook/0.log" Jan 29 10:08:22 crc kubenswrapper[4771]: I0129 10:08:22.837727 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:08:22 crc kubenswrapper[4771]: E0129 10:08:22.838314 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:08:33 crc kubenswrapper[4771]: I0129 10:08:33.838515 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:08:33 crc kubenswrapper[4771]: E0129 10:08:33.839357 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:08:34 crc kubenswrapper[4771]: I0129 10:08:34.537954 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-cv75t_f3164bb7-413d-4fc4-b166-62ea6f7840cd/nmstate-console-plugin/0.log" Jan 29 10:08:34 crc kubenswrapper[4771]: I0129 10:08:34.742646 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ht6nn_a3334432-223c-4661-b12b-ec8524c6439d/nmstate-handler/0.log" Jan 29 10:08:34 crc kubenswrapper[4771]: I0129 10:08:34.812844 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dcp48_1755a15c-b178-498c-a5ca-077feb480beb/kube-rbac-proxy/0.log" Jan 29 10:08:34 crc kubenswrapper[4771]: I0129 10:08:34.857436 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dcp48_1755a15c-b178-498c-a5ca-077feb480beb/nmstate-metrics/0.log" Jan 29 10:08:34 crc kubenswrapper[4771]: I0129 10:08:34.989053 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-zq5mf_6a2f99d9-e297-4f09-8afd-3ab95322be73/nmstate-operator/0.log" Jan 29 10:08:35 crc kubenswrapper[4771]: I0129 10:08:35.088883 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-dgdkp_8b851beb-4218-4b45-8f3a-695b6c6cd02f/nmstate-webhook/0.log" Jan 29 10:08:48 crc kubenswrapper[4771]: I0129 10:08:48.838407 4771 scope.go:117] 
"RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:08:48 crc kubenswrapper[4771]: E0129 10:08:48.839118 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:09:00 crc kubenswrapper[4771]: I0129 10:09:00.849287 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:09:00 crc kubenswrapper[4771]: E0129 10:09:00.853482 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:09:04 crc kubenswrapper[4771]: I0129 10:09:04.314954 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mgllh_a1bccade-3951-4e67-9078-70e904be5b4c/kube-rbac-proxy/0.log" Jan 29 10:09:04 crc kubenswrapper[4771]: I0129 10:09:04.485531 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mgllh_a1bccade-3951-4e67-9078-70e904be5b4c/controller/0.log" Jan 29 10:09:04 crc kubenswrapper[4771]: I0129 10:09:04.571237 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:09:04 crc kubenswrapper[4771]: I0129 10:09:04.757098 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:09:04 crc kubenswrapper[4771]: I0129 10:09:04.760988 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:09:04 crc kubenswrapper[4771]: I0129 10:09:04.818082 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:09:04 crc kubenswrapper[4771]: I0129 10:09:04.825446 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.005138 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.019011 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.021656 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.080057 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.230320 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.237430 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.288495 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/controller/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.347661 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.462165 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/frr-metrics/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.486100 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/kube-rbac-proxy/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.572337 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/kube-rbac-proxy-frr/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.665196 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/reloader/0.log" Jan 29 10:09:05 crc kubenswrapper[4771]: I0129 10:09:05.779161 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6sfz7_5c791596-99d9-4d8f-ba02-c4b866a007a4/frr-k8s-webhook-server/0.log" Jan 29 10:09:06 crc kubenswrapper[4771]: I0129 10:09:06.100683 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-86b88966b-ts5vb_a0a8dfb7-3f50-4649-ade4-04de19016aaf/manager/0.log" Jan 29 10:09:06 crc kubenswrapper[4771]: I0129 10:09:06.184120 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-88c44cd79-5zvsz_760f2b5f-d6d9-4bae-bb03-02c91232b71d/webhook-server/0.log" Jan 29 10:09:06 crc kubenswrapper[4771]: I0129 10:09:06.431649 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-44br2_05c9b0d5-8464-4769-bb43-685213c34f16/kube-rbac-proxy/0.log" Jan 29 10:09:06 crc kubenswrapper[4771]: I0129 10:09:06.802656 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-44br2_05c9b0d5-8464-4769-bb43-685213c34f16/speaker/0.log" Jan 29 10:09:06 crc kubenswrapper[4771]: I0129 10:09:06.905350 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/frr/0.log" Jan 29 10:09:14 crc kubenswrapper[4771]: I0129 10:09:14.838022 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:09:14 crc kubenswrapper[4771]: E0129 10:09:14.838721 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:09:20 crc kubenswrapper[4771]: I0129 10:09:20.194950 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/util/0.log" Jan 29 10:09:20 crc kubenswrapper[4771]: I0129 10:09:20.625451 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/pull/0.log" Jan 29 10:09:20 crc kubenswrapper[4771]: I0129 10:09:20.627980 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/pull/0.log" Jan 29 10:09:20 crc kubenswrapper[4771]: I0129 10:09:20.628485 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/util/0.log" Jan 29 10:09:20 crc kubenswrapper[4771]: I0129 10:09:20.835498 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/util/0.log" Jan 29 10:09:20 crc kubenswrapper[4771]: I0129 10:09:20.864061 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/pull/0.log" Jan 29 10:09:20 crc kubenswrapper[4771]: I0129 10:09:20.886428 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/extract/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.046935 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/util/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.222484 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/util/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.247118 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/pull/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.257925 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/pull/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.436592 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/pull/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.441332 4771 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/util/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.500805 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/extract/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.622542 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-utilities/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.732552 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-utilities/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.774483 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-content/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.779034 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-content/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.990299 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-utilities/0.log" Jan 29 10:09:21 crc kubenswrapper[4771]: I0129 10:09:21.999570 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-content/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 10:09:22.215851 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/registry-server/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 10:09:22.234201 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-utilities/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 10:09:22.409223 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-content/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 10:09:22.440953 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-content/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 10:09:22.464680 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-utilities/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 10:09:22.667799 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-utilities/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 10:09:22.677394 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-content/0.log" Jan 29 10:09:22 crc kubenswrapper[4771]: I0129 
10:09:22.936612 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ts5f5_f98a10ab-5df8-4994-b6a3-c62c3c3a8c82/marketplace-operator/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.017923 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-utilities/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.258804 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-content/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.271068 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-utilities/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.338039 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/registry-server/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.410619 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-content/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.508978 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-utilities/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.521096 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-content/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.658124 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/registry-server/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.749926 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-utilities/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.936374 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-content/0.log" Jan 29 10:09:23 crc kubenswrapper[4771]: I0129 10:09:23.964495 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-utilities/0.log" Jan 29 10:09:24 crc kubenswrapper[4771]: I0129 10:09:24.024391 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-content/0.log" Jan 29 10:09:24 crc kubenswrapper[4771]: I0129 10:09:24.155496 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-utilities/0.log" Jan 29 10:09:24 crc kubenswrapper[4771]: I0129 10:09:24.181209 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-content/0.log" Jan 29 10:09:24 crc kubenswrapper[4771]: I0129 10:09:24.705137 4771 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/registry-server/0.log" Jan 29 10:09:26 crc kubenswrapper[4771]: I0129 10:09:26.839451 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:09:26 crc kubenswrapper[4771]: E0129 10:09:26.840480 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:09:37 crc kubenswrapper[4771]: I0129 10:09:37.839004 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:09:37 crc kubenswrapper[4771]: E0129 10:09:37.839914 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:09:51 crc kubenswrapper[4771]: I0129 10:09:51.838201 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:09:51 crc kubenswrapper[4771]: E0129 10:09:51.839206 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:09:56 crc kubenswrapper[4771]: E0129 10:09:56.430808 4771 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.98:52336->38.129.56.98:40437: write tcp 38.129.56.98:52336->38.129.56.98:40437: write: broken pipe Jan 29 10:10:04 crc kubenswrapper[4771]: I0129 10:10:04.838524 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:10:04 crc kubenswrapper[4771]: E0129 10:10:04.839312 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:10:19 crc kubenswrapper[4771]: I0129 10:10:19.839366 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:10:19 crc kubenswrapper[4771]: E0129 10:10:19.841072 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:10:34 crc kubenswrapper[4771]: I0129 10:10:34.837898 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:10:34 crc kubenswrapper[4771]: E0129 10:10:34.838633 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:10:45 crc kubenswrapper[4771]: I0129 10:10:45.838024 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:10:45 crc kubenswrapper[4771]: E0129 10:10:45.838637 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:10:56 crc kubenswrapper[4771]: I0129 10:10:56.839784 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:10:56 crc kubenswrapper[4771]: E0129 10:10:56.842621 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:11:07 crc kubenswrapper[4771]: I0129 10:11:07.839025 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:11:07 crc kubenswrapper[4771]: E0129 10:11:07.840197 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:11:10 crc kubenswrapper[4771]: I0129 10:11:10.359956 4771 generic.go:334] "Generic (PLEG): container finished" podID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerID="18066028862e3b00a9be60be2d766093af7d7e83ef4d5e7e72ebb4a66f510459" exitCode=0 Jan 29 10:11:10 crc kubenswrapper[4771]: I0129 10:11:10.360062 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z95ps/must-gather-97md4" event={"ID":"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a","Type":"ContainerDied","Data":"18066028862e3b00a9be60be2d766093af7d7e83ef4d5e7e72ebb4a66f510459"} Jan 29 10:11:10 crc 
kubenswrapper[4771]: I0129 10:11:10.361163 4771 scope.go:117] "RemoveContainer" containerID="18066028862e3b00a9be60be2d766093af7d7e83ef4d5e7e72ebb4a66f510459" Jan 29 10:11:10 crc kubenswrapper[4771]: I0129 10:11:10.618068 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z95ps_must-gather-97md4_34d5ae73-c10e-4227-82d7-a1a36d1dfe8a/gather/0.log" Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.092882 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z95ps/must-gather-97md4"] Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.093666 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z95ps/must-gather-97md4" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerName="copy" containerID="cri-o://bba82c8488b0995ab4fb4324f4c0e1d9afa391f2974d6db7bd62123af09589a9" gracePeriod=2 Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.105230 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z95ps/must-gather-97md4"] Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.441433 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z95ps_must-gather-97md4_34d5ae73-c10e-4227-82d7-a1a36d1dfe8a/copy/0.log" Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.442061 4771 generic.go:334] "Generic (PLEG): container finished" podID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerID="bba82c8488b0995ab4fb4324f4c0e1d9afa391f2974d6db7bd62123af09589a9" exitCode=143 Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.442113 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="667d4dba07e5ac1fe85672c2ca44007cd7af1edc6642d6b67dfbbbcb295ad60e" Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.513749 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z95ps_must-gather-97md4_34d5ae73-c10e-4227-82d7-a1a36d1dfe8a/copy/0.log" Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.514137 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.657963 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-must-gather-output\") pod \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.658107 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flb9d\" (UniqueName: \"kubernetes.io/projected/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-kube-api-access-flb9d\") pod \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\" (UID: \"34d5ae73-c10e-4227-82d7-a1a36d1dfe8a\") " Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.800046 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" (UID: "34d5ae73-c10e-4227-82d7-a1a36d1dfe8a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:11:18 crc kubenswrapper[4771]: I0129 10:11:18.864277 4771 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 10:11:19 crc kubenswrapper[4771]: I0129 10:11:19.091029 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-kube-api-access-flb9d" (OuterVolumeSpecName: "kube-api-access-flb9d") pod "34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" (UID: "34d5ae73-c10e-4227-82d7-a1a36d1dfe8a"). InnerVolumeSpecName "kube-api-access-flb9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:11:19 crc kubenswrapper[4771]: I0129 10:11:19.169208 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flb9d\" (UniqueName: \"kubernetes.io/projected/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a-kube-api-access-flb9d\") on node \"crc\" DevicePath \"\"" Jan 29 10:11:19 crc kubenswrapper[4771]: I0129 10:11:19.448877 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z95ps/must-gather-97md4" Jan 29 10:11:20 crc kubenswrapper[4771]: I0129 10:11:20.845425 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:11:20 crc kubenswrapper[4771]: E0129 10:11:20.846100 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:11:20 crc kubenswrapper[4771]: I0129 10:11:20.848203 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" path="/var/lib/kubelet/pods/34d5ae73-c10e-4227-82d7-a1a36d1dfe8a/volumes" Jan 29 10:11:32 crc kubenswrapper[4771]: I0129 10:11:32.839194 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:11:32 crc kubenswrapper[4771]: E0129 10:11:32.840060 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:11:47 crc kubenswrapper[4771]: I0129 10:11:47.839305 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:11:47 crc kubenswrapper[4771]: E0129 10:11:47.840371 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 
10:11:59 crc kubenswrapper[4771]: I0129 10:11:59.837898 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:11:59 crc kubenswrapper[4771]: E0129 10:11:59.838855 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:12:10 crc kubenswrapper[4771]: I0129 10:12:10.861078 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:12:10 crc kubenswrapper[4771]: E0129 10:12:10.863238 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:12:24 crc kubenswrapper[4771]: I0129 10:12:24.837837 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:12:24 crc kubenswrapper[4771]: E0129 10:12:24.838537 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:12:36 crc kubenswrapper[4771]: I0129 10:12:36.838129 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:12:36 crc kubenswrapper[4771]: E0129 10:12:36.838859 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:12:46 crc kubenswrapper[4771]: I0129 10:12:46.856971 4771 scope.go:117] "RemoveContainer" containerID="bba82c8488b0995ab4fb4324f4c0e1d9afa391f2974d6db7bd62123af09589a9" Jan 29 10:12:46 crc kubenswrapper[4771]: I0129 10:12:46.899346 4771 scope.go:117] "RemoveContainer" containerID="18066028862e3b00a9be60be2d766093af7d7e83ef4d5e7e72ebb4a66f510459" Jan 29 10:12:47 crc kubenswrapper[4771]: I0129 10:12:47.000856 4771 scope.go:117] "RemoveContainer" containerID="cec6e8c3dcd8c29fbe760b56e3713616502490b83d7128bbe1b20822d7a399ef" Jan 29 10:12:50 crc kubenswrapper[4771]: I0129 10:12:50.847666 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:12:51 crc kubenswrapper[4771]: I0129 10:12:51.420200 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"5a54ed82061c8a1ebeeee137c8955ccd1a70441f4b919a09c91b46f159da93f1"} Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.179298 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9657z"] Jan 29 10:13:49 crc kubenswrapper[4771]: E0129 10:13:49.180734 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="extract-utilities" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.180753 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="extract-utilities" Jan 29 10:13:49 crc kubenswrapper[4771]: E0129 10:13:49.180787 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="extract-content" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.180796 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="extract-content" Jan 29 10:13:49 crc kubenswrapper[4771]: E0129 10:13:49.180827 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerName="gather" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.180835 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerName="gather" Jan 29 10:13:49 crc kubenswrapper[4771]: E0129 10:13:49.180848 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerName="copy" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.180857 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerName="copy" Jan 29 10:13:49 crc kubenswrapper[4771]: E0129 10:13:49.180878 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="registry-server" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.180887 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="registry-server" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.181119 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerName="copy" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.181140 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd739f7-3d9e-472a-be41-88d8d5fd6b33" containerName="registry-server" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.181175 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d5ae73-c10e-4227-82d7-a1a36d1dfe8a" containerName="gather" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.183379 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.200243 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9657z"] Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.285107 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf55b\" (UniqueName: \"kubernetes.io/projected/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-kube-api-access-nf55b\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.285425 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-catalog-content\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.285521 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-utilities\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.387713 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-utilities\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.387833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf55b\" (UniqueName: \"kubernetes.io/projected/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-kube-api-access-nf55b\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.387870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-catalog-content\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.388399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-utilities\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.388445 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-catalog-content\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.415142 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nf55b\" (UniqueName: \"kubernetes.io/projected/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-kube-api-access-nf55b\") pod \"community-operators-9657z\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:49 crc kubenswrapper[4771]: I0129 10:13:49.560751 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:50 crc kubenswrapper[4771]: I0129 10:13:50.145377 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9657z"] Jan 29 10:13:51 crc kubenswrapper[4771]: I0129 10:13:51.020207 4771 generic.go:334] "Generic (PLEG): container finished" podID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerID="f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2" exitCode=0 Jan 29 10:13:51 crc kubenswrapper[4771]: I0129 10:13:51.020304 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9657z" event={"ID":"0500c1d8-f6b8-4784-b8dd-2f5769d547dc","Type":"ContainerDied","Data":"f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2"} Jan 29 10:13:51 crc kubenswrapper[4771]: I0129 10:13:51.020714 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9657z" event={"ID":"0500c1d8-f6b8-4784-b8dd-2f5769d547dc","Type":"ContainerStarted","Data":"7cd34f2659ada646743c9082577a456619701ca8b4def9694771f4762d09e157"} Jan 29 10:13:51 crc kubenswrapper[4771]: I0129 10:13:51.024958 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 10:13:52 crc kubenswrapper[4771]: I0129 10:13:52.035096 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9657z" event={"ID":"0500c1d8-f6b8-4784-b8dd-2f5769d547dc","Type":"ContainerStarted","Data":"7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88"} Jan 29 10:13:53 crc kubenswrapper[4771]: I0129 10:13:53.047807 4771 generic.go:334] "Generic (PLEG): container finished" podID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerID="7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88" exitCode=0 Jan 29 10:13:53 crc kubenswrapper[4771]: I0129 10:13:53.047879 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9657z" event={"ID":"0500c1d8-f6b8-4784-b8dd-2f5769d547dc","Type":"ContainerDied","Data":"7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88"} Jan 29 10:13:54 crc kubenswrapper[4771]: I0129 10:13:54.062670 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9657z" event={"ID":"0500c1d8-f6b8-4784-b8dd-2f5769d547dc","Type":"ContainerStarted","Data":"3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9"} Jan 29 10:13:54 crc kubenswrapper[4771]: I0129 10:13:54.090258 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9657z" podStartSLOduration=2.6434714 podStartE2EDuration="5.090239009s" podCreationTimestamp="2026-01-29 10:13:49 +0000 UTC" firstStartedPulling="2026-01-29 10:13:51.024673887 +0000 UTC m=+4051.147514114" lastFinishedPulling="2026-01-29 10:13:53.471441496 +0000 UTC m=+4053.594281723" observedRunningTime="2026-01-29 10:13:54.084168764 +0000 UTC m=+4054.207008991" watchObservedRunningTime="2026-01-29 
10:13:54.090239009 +0000 UTC m=+4054.213079236" Jan 29 10:13:59 crc kubenswrapper[4771]: I0129 10:13:59.561445 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:59 crc kubenswrapper[4771]: I0129 10:13:59.564077 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:13:59 crc kubenswrapper[4771]: I0129 10:13:59.630676 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:14:00 crc kubenswrapper[4771]: I0129 10:14:00.184856 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:14:00 crc kubenswrapper[4771]: I0129 10:14:00.250773 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9657z"] Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.152754 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9657z" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="registry-server" containerID="cri-o://3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9" gracePeriod=2 Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.665261 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.807668 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf55b\" (UniqueName: \"kubernetes.io/projected/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-kube-api-access-nf55b\") pod \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.807944 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-utilities\") pod \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.807967 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-catalog-content\") pod \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\" (UID: \"0500c1d8-f6b8-4784-b8dd-2f5769d547dc\") " Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.809235 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-utilities" (OuterVolumeSpecName: "utilities") pod "0500c1d8-f6b8-4784-b8dd-2f5769d547dc" (UID: "0500c1d8-f6b8-4784-b8dd-2f5769d547dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.816148 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-kube-api-access-nf55b" (OuterVolumeSpecName: "kube-api-access-nf55b") pod "0500c1d8-f6b8-4784-b8dd-2f5769d547dc" (UID: "0500c1d8-f6b8-4784-b8dd-2f5769d547dc"). InnerVolumeSpecName "kube-api-access-nf55b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.911093 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf55b\" (UniqueName: \"kubernetes.io/projected/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-kube-api-access-nf55b\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:02 crc kubenswrapper[4771]: I0129 10:14:02.911145 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.133871 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0500c1d8-f6b8-4784-b8dd-2f5769d547dc" (UID: "0500c1d8-f6b8-4784-b8dd-2f5769d547dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.166233 4771 generic.go:334] "Generic (PLEG): container finished" podID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerID="3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9" exitCode=0 Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.166301 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9657z" event={"ID":"0500c1d8-f6b8-4784-b8dd-2f5769d547dc","Type":"ContainerDied","Data":"3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9"} Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.166350 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9657z" event={"ID":"0500c1d8-f6b8-4784-b8dd-2f5769d547dc","Type":"ContainerDied","Data":"7cd34f2659ada646743c9082577a456619701ca8b4def9694771f4762d09e157"} Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.166361 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9657z" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.166375 4771 scope.go:117] "RemoveContainer" containerID="3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.208306 4771 scope.go:117] "RemoveContainer" containerID="7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.219069 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0500c1d8-f6b8-4784-b8dd-2f5769d547dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.219624 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9657z"] Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.233504 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9657z"] Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.235600 4771 scope.go:117] "RemoveContainer" containerID="f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.305885 4771 scope.go:117] "RemoveContainer" containerID="3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9" Jan 29 10:14:03 crc kubenswrapper[4771]: E0129 10:14:03.307479 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9\": container with ID starting with 3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9 not found: ID does not exist" containerID="3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.307559 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9"} err="failed to get container status \"3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9\": rpc error: code = NotFound desc = could not find container \"3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9\": container with ID starting with 3da4290f27563c188b1127bc74a781ab2a77af4c7c178ef74ecf786a7e2eb8b9 not found: ID does not exist" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.307602 4771 scope.go:117] "RemoveContainer" containerID="7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88" Jan 29 10:14:03 crc kubenswrapper[4771]: E0129 10:14:03.308585 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88\": container with ID starting with 7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88 not found: ID does not exist" containerID="7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.308635 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88"} err="failed to get container status \"7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88\": rpc error: code = NotFound desc = could not find container 
\"7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88\": container with ID starting with 7ad8fa28da23f31b6618d11832e5c67e09c44453d7ebdbbfccdeb8fe37709e88 not found: ID does not exist" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.308668 4771 scope.go:117] "RemoveContainer" containerID="f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2" Jan 29 10:14:03 crc kubenswrapper[4771]: E0129 10:14:03.309114 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2\": container with ID starting with f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2 not found: ID does not exist" containerID="f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2" Jan 29 10:14:03 crc kubenswrapper[4771]: I0129 10:14:03.309145 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2"} err="failed to get container status \"f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2\": rpc error: code = NotFound desc = could not find container \"f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2\": container with ID starting with f9b9b3d274357b7a7d0d6112b00aa68b4a1f264e1722a45b5e3cc9c3f7ed68e2 not found: ID does not exist" Jan 29 10:14:04 crc kubenswrapper[4771]: I0129 10:14:04.859989 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" path="/var/lib/kubelet/pods/0500c1d8-f6b8-4784-b8dd-2f5769d547dc/volumes" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.744613 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t7jkn/must-gather-hkkx8"] Jan 29 10:14:07 crc kubenswrapper[4771]: E0129 10:14:07.745872 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="registry-server" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.745886 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="registry-server" Jan 29 10:14:07 crc kubenswrapper[4771]: E0129 10:14:07.745895 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="extract-content" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.745901 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="extract-content" Jan 29 10:14:07 crc kubenswrapper[4771]: E0129 10:14:07.745913 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="extract-utilities" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.745921 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="extract-utilities" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.746132 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0500c1d8-f6b8-4784-b8dd-2f5769d547dc" containerName="registry-server" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.747166 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:07 crc kubenswrapper[4771]: W0129 10:14:07.749375 4771 reflector.go:561] object-"openshift-must-gather-t7jkn"/"default-dockercfg-r4jfc": failed to list *v1.Secret: secrets "default-dockercfg-r4jfc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-must-gather-t7jkn": no relationship found between node 'crc' and this object Jan 29 10:14:07 crc kubenswrapper[4771]: E0129 10:14:07.749468 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-must-gather-t7jkn\"/\"default-dockercfg-r4jfc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-r4jfc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-must-gather-t7jkn\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 10:14:07 crc kubenswrapper[4771]: W0129 10:14:07.749677 4771 reflector.go:561] object-"openshift-must-gather-t7jkn"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-must-gather-t7jkn": no relationship found between node 'crc' and this object Jan 29 10:14:07 crc kubenswrapper[4771]: E0129 10:14:07.749739 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-must-gather-t7jkn\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-must-gather-t7jkn\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.750184 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t7jkn"/"openshift-service-ca.crt" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.834842 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t7jkn/must-gather-hkkx8"] Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.837776 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87hsh\" (UniqueName: \"kubernetes.io/projected/8718b930-7393-4bd0-8d4f-028684732b5f-kube-api-access-87hsh\") pod \"must-gather-hkkx8\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.837930 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8718b930-7393-4bd0-8d4f-028684732b5f-must-gather-output\") pod \"must-gather-hkkx8\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.940034 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87hsh\" (UniqueName: \"kubernetes.io/projected/8718b930-7393-4bd0-8d4f-028684732b5f-kube-api-access-87hsh\") pod \"must-gather-hkkx8\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.940121 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8718b930-7393-4bd0-8d4f-028684732b5f-must-gather-output\") pod \"must-gather-hkkx8\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:07 crc kubenswrapper[4771]: I0129 10:14:07.940590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8718b930-7393-4bd0-8d4f-028684732b5f-must-gather-output\") pod \"must-gather-hkkx8\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:08 crc kubenswrapper[4771]: I0129 10:14:08.787898 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t7jkn"/"kube-root-ca.crt" Jan 29 10:14:08 crc kubenswrapper[4771]: I0129 10:14:08.803269 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87hsh\" (UniqueName: \"kubernetes.io/projected/8718b930-7393-4bd0-8d4f-028684732b5f-kube-api-access-87hsh\") pod \"must-gather-hkkx8\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:09 crc kubenswrapper[4771]: I0129 10:14:09.018890 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t7jkn"/"default-dockercfg-r4jfc" Jan 29 10:14:09 crc kubenswrapper[4771]: I0129 10:14:09.026835 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:14:09 crc kubenswrapper[4771]: I0129 10:14:09.507328 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t7jkn/must-gather-hkkx8"] Jan 29 10:14:10 crc kubenswrapper[4771]: I0129 10:14:10.236524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" event={"ID":"8718b930-7393-4bd0-8d4f-028684732b5f","Type":"ContainerStarted","Data":"0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e"} Jan 29 10:14:10 crc kubenswrapper[4771]: I0129 10:14:10.237203 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" event={"ID":"8718b930-7393-4bd0-8d4f-028684732b5f","Type":"ContainerStarted","Data":"238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce"} Jan 29 10:14:10 crc kubenswrapper[4771]: I0129 10:14:10.237215 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" event={"ID":"8718b930-7393-4bd0-8d4f-028684732b5f","Type":"ContainerStarted","Data":"b7c7c0a2c37fb5ed7562124b04f22f5e120df89f1e1cda9e6e4622f157f4b6cc"} Jan 29 10:14:10 crc kubenswrapper[4771]: I0129 10:14:10.256037 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" podStartSLOduration=3.256014531 podStartE2EDuration="3.256014531s" podCreationTimestamp="2026-01-29 10:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 10:14:10.253178904 +0000 UTC m=+4070.376019131" watchObservedRunningTime="2026-01-29 10:14:10.256014531 +0000 UTC m=+4070.378854768" Jan 29 10:14:12 crc kubenswrapper[4771]: E0129 10:14:12.409789 4771 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.98:49426->38.129.56.98:40437: write tcp 
38.129.56.98:49426->38.129.56.98:40437: write: broken pipe Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.460535 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-zqf4f"] Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.462009 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.566226 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrbp\" (UniqueName: \"kubernetes.io/projected/dcc58eed-1473-4c57-ac36-05fa98896646-kube-api-access-scrbp\") pod \"crc-debug-zqf4f\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.566455 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc58eed-1473-4c57-ac36-05fa98896646-host\") pod \"crc-debug-zqf4f\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.669582 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scrbp\" (UniqueName: \"kubernetes.io/projected/dcc58eed-1473-4c57-ac36-05fa98896646-kube-api-access-scrbp\") pod \"crc-debug-zqf4f\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.669662 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc58eed-1473-4c57-ac36-05fa98896646-host\") pod \"crc-debug-zqf4f\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.669856 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc58eed-1473-4c57-ac36-05fa98896646-host\") pod \"crc-debug-zqf4f\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.721394 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scrbp\" (UniqueName: \"kubernetes.io/projected/dcc58eed-1473-4c57-ac36-05fa98896646-kube-api-access-scrbp\") pod \"crc-debug-zqf4f\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: I0129 10:14:13.784203 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:13 crc kubenswrapper[4771]: W0129 10:14:13.837370 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcc58eed_1473_4c57_ac36_05fa98896646.slice/crio-63245153fd73229a6dae8c2e10dc2f793aae7781539b4f0d71428cc0cd2c2ae5 WatchSource:0}: Error finding container 63245153fd73229a6dae8c2e10dc2f793aae7781539b4f0d71428cc0cd2c2ae5: Status 404 returned error can't find the container with id 63245153fd73229a6dae8c2e10dc2f793aae7781539b4f0d71428cc0cd2c2ae5 Jan 29 10:14:14 crc kubenswrapper[4771]: I0129 10:14:14.300569 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" event={"ID":"dcc58eed-1473-4c57-ac36-05fa98896646","Type":"ContainerStarted","Data":"b555bd84aad10e833b265478d1223f763bc05c043f13c5562b9656e47dc343fe"} Jan 29 10:14:14 crc kubenswrapper[4771]: I0129 10:14:14.300948 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" event={"ID":"dcc58eed-1473-4c57-ac36-05fa98896646","Type":"ContainerStarted","Data":"63245153fd73229a6dae8c2e10dc2f793aae7781539b4f0d71428cc0cd2c2ae5"} Jan 29 10:14:14 crc kubenswrapper[4771]: I0129 10:14:14.335277 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" podStartSLOduration=1.335252007 podStartE2EDuration="1.335252007s" podCreationTimestamp="2026-01-29 10:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 10:14:14.326496019 +0000 UTC m=+4074.449336246" watchObservedRunningTime="2026-01-29 10:14:14.335252007 +0000 UTC m=+4074.458092224" Jan 29 10:14:45 crc kubenswrapper[4771]: I0129 10:14:45.750675 4771 generic.go:334] "Generic (PLEG): container finished" podID="dcc58eed-1473-4c57-ac36-05fa98896646" containerID="b555bd84aad10e833b265478d1223f763bc05c043f13c5562b9656e47dc343fe" exitCode=0 Jan 29 10:14:45 crc kubenswrapper[4771]: I0129 10:14:45.750746 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" event={"ID":"dcc58eed-1473-4c57-ac36-05fa98896646","Type":"ContainerDied","Data":"b555bd84aad10e833b265478d1223f763bc05c043f13c5562b9656e47dc343fe"} Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.325651 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xrj5t"] Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.327689 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.353152 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xrj5t"] Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.433357 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-utilities\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.433546 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstwt\" (UniqueName: \"kubernetes.io/projected/c06bf324-ada0-4a87-b52b-b0e5306ec869-kube-api-access-kstwt\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.433621 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-catalog-content\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.535858 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kstwt\" (UniqueName: \"kubernetes.io/projected/c06bf324-ada0-4a87-b52b-b0e5306ec869-kube-api-access-kstwt\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.535929 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-catalog-content\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.535989 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-utilities\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.536418 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-catalog-content\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.536453 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-utilities\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.555205 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kstwt\" (UniqueName: \"kubernetes.io/projected/c06bf324-ada0-4a87-b52b-b0e5306ec869-kube-api-access-kstwt\") pod \"redhat-operators-xrj5t\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.650395 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.886122 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.939258 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-zqf4f"] Jan 29 10:14:46 crc kubenswrapper[4771]: I0129 10:14:46.948508 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-zqf4f"] Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.044963 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc58eed-1473-4c57-ac36-05fa98896646-host\") pod \"dcc58eed-1473-4c57-ac36-05fa98896646\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.045046 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scrbp\" (UniqueName: \"kubernetes.io/projected/dcc58eed-1473-4c57-ac36-05fa98896646-kube-api-access-scrbp\") pod \"dcc58eed-1473-4c57-ac36-05fa98896646\" (UID: \"dcc58eed-1473-4c57-ac36-05fa98896646\") " Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.046039 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcc58eed-1473-4c57-ac36-05fa98896646-host" (OuterVolumeSpecName: "host") pod "dcc58eed-1473-4c57-ac36-05fa98896646" (UID: "dcc58eed-1473-4c57-ac36-05fa98896646"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.052342 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc58eed-1473-4c57-ac36-05fa98896646-kube-api-access-scrbp" (OuterVolumeSpecName: "kube-api-access-scrbp") pod "dcc58eed-1473-4c57-ac36-05fa98896646" (UID: "dcc58eed-1473-4c57-ac36-05fa98896646"). InnerVolumeSpecName "kube-api-access-scrbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.149058 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc58eed-1473-4c57-ac36-05fa98896646-host\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.149087 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scrbp\" (UniqueName: \"kubernetes.io/projected/dcc58eed-1473-4c57-ac36-05fa98896646-kube-api-access-scrbp\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.170462 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xrj5t"] Jan 29 10:14:47 crc kubenswrapper[4771]: W0129 10:14:47.189088 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc06bf324_ada0_4a87_b52b_b0e5306ec869.slice/crio-f4e1fb391a73e845c66873621e43510e2b01685ae629c1244409093f12c1d4c8 WatchSource:0}: Error finding container f4e1fb391a73e845c66873621e43510e2b01685ae629c1244409093f12c1d4c8: Status 404 returned error can't find the container with id f4e1fb391a73e845c66873621e43510e2b01685ae629c1244409093f12c1d4c8 Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.771430 4771 generic.go:334] "Generic (PLEG): container finished" podID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerID="568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff" exitCode=0 Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.771482 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrj5t" event={"ID":"c06bf324-ada0-4a87-b52b-b0e5306ec869","Type":"ContainerDied","Data":"568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff"} Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.771814 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrj5t" event={"ID":"c06bf324-ada0-4a87-b52b-b0e5306ec869","Type":"ContainerStarted","Data":"f4e1fb391a73e845c66873621e43510e2b01685ae629c1244409093f12c1d4c8"} Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.774010 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63245153fd73229a6dae8c2e10dc2f793aae7781539b4f0d71428cc0cd2c2ae5" Jan 29 10:14:47 crc kubenswrapper[4771]: I0129 10:14:47.774127 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-zqf4f" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.148294 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-xdxkj"] Jan 29 10:14:48 crc kubenswrapper[4771]: E0129 10:14:48.149617 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc58eed-1473-4c57-ac36-05fa98896646" containerName="container-00" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.149637 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc58eed-1473-4c57-ac36-05fa98896646" containerName="container-00" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.149852 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc58eed-1473-4c57-ac36-05fa98896646" containerName="container-00" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.150660 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.275571 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8791ab51-9a21-4550-b4da-91b72cafeb0f-host\") pod \"crc-debug-xdxkj\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.275733 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjqh\" (UniqueName: \"kubernetes.io/projected/8791ab51-9a21-4550-b4da-91b72cafeb0f-kube-api-access-7sjqh\") pod \"crc-debug-xdxkj\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.377934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8791ab51-9a21-4550-b4da-91b72cafeb0f-host\") pod \"crc-debug-xdxkj\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.378038 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjqh\" (UniqueName: \"kubernetes.io/projected/8791ab51-9a21-4550-b4da-91b72cafeb0f-kube-api-access-7sjqh\") pod \"crc-debug-xdxkj\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.378087 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8791ab51-9a21-4550-b4da-91b72cafeb0f-host\") pod \"crc-debug-xdxkj\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.793395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjqh\" (UniqueName: \"kubernetes.io/projected/8791ab51-9a21-4550-b4da-91b72cafeb0f-kube-api-access-7sjqh\") pod \"crc-debug-xdxkj\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:48 crc kubenswrapper[4771]: I0129 10:14:48.852995 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc58eed-1473-4c57-ac36-05fa98896646" path="/var/lib/kubelet/pods/dcc58eed-1473-4c57-ac36-05fa98896646/volumes" Jan 29 10:14:49 crc kubenswrapper[4771]: I0129 10:14:49.069564 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:49 crc kubenswrapper[4771]: W0129 10:14:49.103179 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8791ab51_9a21_4550_b4da_91b72cafeb0f.slice/crio-185a21bf632475a208d649a02a6cde224308606db16450300ab88ebfe1e2f223 WatchSource:0}: Error finding container 185a21bf632475a208d649a02a6cde224308606db16450300ab88ebfe1e2f223: Status 404 returned error can't find the container with id 185a21bf632475a208d649a02a6cde224308606db16450300ab88ebfe1e2f223 Jan 29 10:14:49 crc kubenswrapper[4771]: I0129 10:14:49.796039 4771 generic.go:334] "Generic (PLEG): container finished" podID="8791ab51-9a21-4550-b4da-91b72cafeb0f" containerID="d78ba373b22a15b134e0be9b47d53757d95d3befb74822e752b3fd2e546545ae" exitCode=0 Jan 29 10:14:49 crc kubenswrapper[4771]: I0129 10:14:49.796155 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" event={"ID":"8791ab51-9a21-4550-b4da-91b72cafeb0f","Type":"ContainerDied","Data":"d78ba373b22a15b134e0be9b47d53757d95d3befb74822e752b3fd2e546545ae"} Jan 29 10:14:49 crc kubenswrapper[4771]: I0129 10:14:49.796441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" event={"ID":"8791ab51-9a21-4550-b4da-91b72cafeb0f","Type":"ContainerStarted","Data":"185a21bf632475a208d649a02a6cde224308606db16450300ab88ebfe1e2f223"} Jan 29 10:14:49 crc kubenswrapper[4771]: I0129 10:14:49.798655 4771 generic.go:334] "Generic (PLEG): container finished" podID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerID="c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b" exitCode=0 Jan 29 10:14:49 crc kubenswrapper[4771]: I0129 10:14:49.798727 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrj5t" event={"ID":"c06bf324-ada0-4a87-b52b-b0e5306ec869","Type":"ContainerDied","Data":"c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b"} Jan 29 10:14:50 crc kubenswrapper[4771]: I0129 10:14:50.282560 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-xdxkj"] Jan 29 10:14:50 crc kubenswrapper[4771]: I0129 10:14:50.291126 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-xdxkj"] Jan 29 10:14:50 crc kubenswrapper[4771]: I0129 10:14:50.814740 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrj5t" event={"ID":"c06bf324-ada0-4a87-b52b-b0e5306ec869","Type":"ContainerStarted","Data":"24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94"} Jan 29 10:14:50 crc kubenswrapper[4771]: I0129 10:14:50.833925 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xrj5t" podStartSLOduration=2.428633677 podStartE2EDuration="4.833908796s" podCreationTimestamp="2026-01-29 10:14:46 +0000 UTC" firstStartedPulling="2026-01-29 10:14:47.773623828 +0000 UTC m=+4107.896464055" lastFinishedPulling="2026-01-29 10:14:50.178898947 +0000 UTC m=+4110.301739174" observedRunningTime="2026-01-29 10:14:50.830377159 +0000 UTC m=+4110.953217406" watchObservedRunningTime="2026-01-29 10:14:50.833908796 +0000 UTC m=+4110.956749023" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.175360 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.348968 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8791ab51-9a21-4550-b4da-91b72cafeb0f-host\") pod \"8791ab51-9a21-4550-b4da-91b72cafeb0f\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.349110 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8791ab51-9a21-4550-b4da-91b72cafeb0f-host" (OuterVolumeSpecName: "host") pod "8791ab51-9a21-4550-b4da-91b72cafeb0f" (UID: "8791ab51-9a21-4550-b4da-91b72cafeb0f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.349206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjqh\" (UniqueName: \"kubernetes.io/projected/8791ab51-9a21-4550-b4da-91b72cafeb0f-kube-api-access-7sjqh\") pod \"8791ab51-9a21-4550-b4da-91b72cafeb0f\" (UID: \"8791ab51-9a21-4550-b4da-91b72cafeb0f\") " Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.349633 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8791ab51-9a21-4550-b4da-91b72cafeb0f-host\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.359894 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8791ab51-9a21-4550-b4da-91b72cafeb0f-kube-api-access-7sjqh" (OuterVolumeSpecName: "kube-api-access-7sjqh") pod "8791ab51-9a21-4550-b4da-91b72cafeb0f" (UID: "8791ab51-9a21-4550-b4da-91b72cafeb0f"). InnerVolumeSpecName "kube-api-access-7sjqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.451942 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sjqh\" (UniqueName: \"kubernetes.io/projected/8791ab51-9a21-4550-b4da-91b72cafeb0f-kube-api-access-7sjqh\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.480211 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-69b4g"] Jan 29 10:14:51 crc kubenswrapper[4771]: E0129 10:14:51.480875 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8791ab51-9a21-4550-b4da-91b72cafeb0f" containerName="container-00" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.480900 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8791ab51-9a21-4550-b4da-91b72cafeb0f" containerName="container-00" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.481160 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8791ab51-9a21-4550-b4da-91b72cafeb0f" containerName="container-00" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.481911 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.656151 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b6dd832-24d6-4d8c-8415-97fc36d113a0-host\") pod \"crc-debug-69b4g\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.656223 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpzg\" (UniqueName: \"kubernetes.io/projected/2b6dd832-24d6-4d8c-8415-97fc36d113a0-kube-api-access-4dpzg\") pod \"crc-debug-69b4g\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.757733 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b6dd832-24d6-4d8c-8415-97fc36d113a0-host\") pod \"crc-debug-69b4g\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.757829 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpzg\" (UniqueName: \"kubernetes.io/projected/2b6dd832-24d6-4d8c-8415-97fc36d113a0-kube-api-access-4dpzg\") pod \"crc-debug-69b4g\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.757907 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b6dd832-24d6-4d8c-8415-97fc36d113a0-host\") pod \"crc-debug-69b4g\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.777400 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpzg\" (UniqueName: \"kubernetes.io/projected/2b6dd832-24d6-4d8c-8415-97fc36d113a0-kube-api-access-4dpzg\") pod \"crc-debug-69b4g\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.801371 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.823683 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-xdxkj" Jan 29 10:14:51 crc kubenswrapper[4771]: I0129 10:14:51.824374 4771 scope.go:117] "RemoveContainer" containerID="d78ba373b22a15b134e0be9b47d53757d95d3befb74822e752b3fd2e546545ae" Jan 29 10:14:52 crc kubenswrapper[4771]: I0129 10:14:52.833651 4771 generic.go:334] "Generic (PLEG): container finished" podID="2b6dd832-24d6-4d8c-8415-97fc36d113a0" containerID="d688841db33058da01bc65a7e8d21f518294d1a3e1ca834e71ef1c89363821b4" exitCode=0 Jan 29 10:14:52 crc kubenswrapper[4771]: I0129 10:14:52.833755 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/crc-debug-69b4g" event={"ID":"2b6dd832-24d6-4d8c-8415-97fc36d113a0","Type":"ContainerDied","Data":"d688841db33058da01bc65a7e8d21f518294d1a3e1ca834e71ef1c89363821b4"} Jan 29 10:14:52 crc kubenswrapper[4771]: I0129 10:14:52.834271 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/crc-debug-69b4g" event={"ID":"2b6dd832-24d6-4d8c-8415-97fc36d113a0","Type":"ContainerStarted","Data":"fe8054f9c46787489a794cb4780393ecdfc3f6b3247ce387779202da2f9687c7"} Jan 29 10:14:52 crc kubenswrapper[4771]: I0129 10:14:52.849972 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8791ab51-9a21-4550-b4da-91b72cafeb0f" path="/var/lib/kubelet/pods/8791ab51-9a21-4550-b4da-91b72cafeb0f/volumes" Jan 29 10:14:52 crc kubenswrapper[4771]: I0129 10:14:52.914630 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-69b4g"] Jan 29 10:14:52 crc kubenswrapper[4771]: I0129 10:14:52.926213 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t7jkn/crc-debug-69b4g"] Jan 29 10:14:53 crc kubenswrapper[4771]: I0129 10:14:53.973925 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.112356 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpzg\" (UniqueName: \"kubernetes.io/projected/2b6dd832-24d6-4d8c-8415-97fc36d113a0-kube-api-access-4dpzg\") pod \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.112593 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b6dd832-24d6-4d8c-8415-97fc36d113a0-host\") pod \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\" (UID: \"2b6dd832-24d6-4d8c-8415-97fc36d113a0\") " Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.112802 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b6dd832-24d6-4d8c-8415-97fc36d113a0-host" (OuterVolumeSpecName: "host") pod "2b6dd832-24d6-4d8c-8415-97fc36d113a0" (UID: "2b6dd832-24d6-4d8c-8415-97fc36d113a0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.114114 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b6dd832-24d6-4d8c-8415-97fc36d113a0-host\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.119091 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6dd832-24d6-4d8c-8415-97fc36d113a0-kube-api-access-4dpzg" (OuterVolumeSpecName: "kube-api-access-4dpzg") pod "2b6dd832-24d6-4d8c-8415-97fc36d113a0" (UID: "2b6dd832-24d6-4d8c-8415-97fc36d113a0"). InnerVolumeSpecName "kube-api-access-4dpzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.216796 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpzg\" (UniqueName: \"kubernetes.io/projected/2b6dd832-24d6-4d8c-8415-97fc36d113a0-kube-api-access-4dpzg\") on node \"crc\" DevicePath \"\"" Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.850535 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6dd832-24d6-4d8c-8415-97fc36d113a0" path="/var/lib/kubelet/pods/2b6dd832-24d6-4d8c-8415-97fc36d113a0/volumes" Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.854864 4771 scope.go:117] "RemoveContainer" containerID="d688841db33058da01bc65a7e8d21f518294d1a3e1ca834e71ef1c89363821b4" Jan 29 10:14:54 crc kubenswrapper[4771]: I0129 10:14:54.855172 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/crc-debug-69b4g" Jan 29 10:14:56 crc kubenswrapper[4771]: I0129 10:14:56.651665 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:56 crc kubenswrapper[4771]: I0129 10:14:56.652306 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:56 crc kubenswrapper[4771]: I0129 10:14:56.697501 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:56 crc kubenswrapper[4771]: I0129 10:14:56.936091 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:56 crc kubenswrapper[4771]: I0129 10:14:56.979445 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xrj5t"] Jan 29 10:14:58 crc kubenswrapper[4771]: I0129 10:14:58.891418 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xrj5t" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="registry-server" containerID="cri-o://24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94" gracePeriod=2 Jan 29 10:14:59 crc kubenswrapper[4771]: I0129 10:14:59.905642 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:14:59 crc kubenswrapper[4771]: I0129 10:14:59.906463 4771 generic.go:334] "Generic (PLEG): container finished" podID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerID="24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94" exitCode=0 Jan 29 10:14:59 crc kubenswrapper[4771]: I0129 10:14:59.906507 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrj5t" event={"ID":"c06bf324-ada0-4a87-b52b-b0e5306ec869","Type":"ContainerDied","Data":"24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94"} Jan 29 10:14:59 crc kubenswrapper[4771]: I0129 10:14:59.906546 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrj5t" event={"ID":"c06bf324-ada0-4a87-b52b-b0e5306ec869","Type":"ContainerDied","Data":"f4e1fb391a73e845c66873621e43510e2b01685ae629c1244409093f12c1d4c8"} Jan 29 10:14:59 crc kubenswrapper[4771]: I0129 10:14:59.906566 4771 scope.go:117] "RemoveContainer" containerID="24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94" Jan 29 10:14:59 crc kubenswrapper[4771]: I0129 10:14:59.937959 4771 scope.go:117] "RemoveContainer" containerID="c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b" Jan 29 10:14:59 crc kubenswrapper[4771]: I0129 10:14:59.975950 4771 scope.go:117] "RemoveContainer" containerID="568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.015351 4771 scope.go:117] "RemoveContainer" containerID="24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94" Jan 29 10:15:00 crc kubenswrapper[4771]: E0129 10:15:00.015960 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94\": container with ID starting with 24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94 not found: ID does not exist" containerID="24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.016013 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94"} err="failed to get container status \"24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94\": rpc error: code = NotFound desc = could not find container \"24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94\": container with ID starting with 24a8b1c18288e35b9863e0c32d4dfdc2d285bcb000a6feec91c07f5acdaebd94 not found: ID does not exist" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.016046 4771 scope.go:117] "RemoveContainer" containerID="c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b" Jan 29 10:15:00 crc kubenswrapper[4771]: E0129 10:15:00.016488 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b\": container with ID starting with c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b not found: ID does not exist" containerID="c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.016532 4771 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b"} err="failed to get container status \"c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b\": rpc error: code = NotFound desc = could not find container \"c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b\": container with ID starting with c279cd3ed42db63f7422d624f5de5d83969e11a7a98a51a60a5212ad6fe0c17b not found: ID does not exist" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.016562 4771 scope.go:117] "RemoveContainer" containerID="568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff" Jan 29 10:15:00 crc kubenswrapper[4771]: E0129 10:15:00.017016 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff\": container with ID starting with 568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff not found: ID does not exist" containerID="568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.017102 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff"} err="failed to get container status \"568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff\": rpc error: code = NotFound desc = could not find container \"568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff\": container with ID starting with 568b7fcc62395edadabdc4ea5a7118755c4ab7cd76f5959b2f16e6401e781eff not found: ID does not exist" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.030752 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-catalog-content\") pod \"c06bf324-ada0-4a87-b52b-b0e5306ec869\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.030876 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kstwt\" (UniqueName: \"kubernetes.io/projected/c06bf324-ada0-4a87-b52b-b0e5306ec869-kube-api-access-kstwt\") pod \"c06bf324-ada0-4a87-b52b-b0e5306ec869\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.031089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-utilities\") pod \"c06bf324-ada0-4a87-b52b-b0e5306ec869\" (UID: \"c06bf324-ada0-4a87-b52b-b0e5306ec869\") " Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.032112 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-utilities" (OuterVolumeSpecName: "utilities") pod "c06bf324-ada0-4a87-b52b-b0e5306ec869" (UID: "c06bf324-ada0-4a87-b52b-b0e5306ec869"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.036350 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06bf324-ada0-4a87-b52b-b0e5306ec869-kube-api-access-kstwt" (OuterVolumeSpecName: "kube-api-access-kstwt") pod "c06bf324-ada0-4a87-b52b-b0e5306ec869" (UID: "c06bf324-ada0-4a87-b52b-b0e5306ec869"). InnerVolumeSpecName "kube-api-access-kstwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.135191 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.135223 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kstwt\" (UniqueName: \"kubernetes.io/projected/c06bf324-ada0-4a87-b52b-b0e5306ec869-kube-api-access-kstwt\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.176995 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c06bf324-ada0-4a87-b52b-b0e5306ec869" (UID: "c06bf324-ada0-4a87-b52b-b0e5306ec869"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.179207 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl"] Jan 29 10:15:00 crc kubenswrapper[4771]: E0129 10:15:00.179739 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6dd832-24d6-4d8c-8415-97fc36d113a0" containerName="container-00" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.179762 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6dd832-24d6-4d8c-8415-97fc36d113a0" containerName="container-00" Jan 29 10:15:00 crc kubenswrapper[4771]: E0129 10:15:00.179786 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="extract-content" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.179795 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="extract-content" Jan 29 10:15:00 crc kubenswrapper[4771]: E0129 10:15:00.179816 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="extract-utilities" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.179826 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="extract-utilities" Jan 29 10:15:00 crc kubenswrapper[4771]: E0129 10:15:00.179851 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="registry-server" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.179859 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="registry-server" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.180089 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" containerName="registry-server" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.180119 4771 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6dd832-24d6-4d8c-8415-97fc36d113a0" containerName="container-00" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.180928 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.186842 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.187396 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.193032 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl"] Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.236997 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c06bf324-ada0-4a87-b52b-b0e5306ec869-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.338767 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkzm\" (UniqueName: \"kubernetes.io/projected/e903b0cd-392a-48f9-b437-0ef7ec71270f-kube-api-access-lgkzm\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.338904 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e903b0cd-392a-48f9-b437-0ef7ec71270f-config-volume\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.339109 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e903b0cd-392a-48f9-b437-0ef7ec71270f-secret-volume\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.441340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e903b0cd-392a-48f9-b437-0ef7ec71270f-secret-volume\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.441481 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgkzm\" (UniqueName: \"kubernetes.io/projected/e903b0cd-392a-48f9-b437-0ef7ec71270f-kube-api-access-lgkzm\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.441510 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/e903b0cd-392a-48f9-b437-0ef7ec71270f-config-volume\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.442546 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e903b0cd-392a-48f9-b437-0ef7ec71270f-config-volume\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.445065 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e903b0cd-392a-48f9-b437-0ef7ec71270f-secret-volume\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.460181 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgkzm\" (UniqueName: \"kubernetes.io/projected/e903b0cd-392a-48f9-b437-0ef7ec71270f-kube-api-access-lgkzm\") pod \"collect-profiles-29494695-zflwl\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.535364 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.916566 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xrj5t" Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.939191 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xrj5t"] Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.947724 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xrj5t"] Jan 29 10:15:00 crc kubenswrapper[4771]: I0129 10:15:00.991376 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl"] Jan 29 10:15:01 crc kubenswrapper[4771]: I0129 10:15:01.926439 4771 generic.go:334] "Generic (PLEG): container finished" podID="e903b0cd-392a-48f9-b437-0ef7ec71270f" containerID="34f18e9a1d4716bfd0cd4bee8ab821038c61ec36e9ffb626359212a3ae19bd6d" exitCode=0 Jan 29 10:15:01 crc kubenswrapper[4771]: I0129 10:15:01.926488 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" event={"ID":"e903b0cd-392a-48f9-b437-0ef7ec71270f","Type":"ContainerDied","Data":"34f18e9a1d4716bfd0cd4bee8ab821038c61ec36e9ffb626359212a3ae19bd6d"} Jan 29 10:15:01 crc kubenswrapper[4771]: I0129 10:15:01.926946 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" event={"ID":"e903b0cd-392a-48f9-b437-0ef7ec71270f","Type":"ContainerStarted","Data":"4e69d551fecf8cf60b61eeafdeefc31615248b44022bb3bac51764ffd5d0893e"} Jan 29 10:15:02 crc kubenswrapper[4771]: I0129 10:15:02.848354 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06bf324-ada0-4a87-b52b-b0e5306ec869" path="/var/lib/kubelet/pods/c06bf324-ada0-4a87-b52b-b0e5306ec869/volumes" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.338609 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.397273 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e903b0cd-392a-48f9-b437-0ef7ec71270f-config-volume\") pod \"e903b0cd-392a-48f9-b437-0ef7ec71270f\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.397333 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgkzm\" (UniqueName: \"kubernetes.io/projected/e903b0cd-392a-48f9-b437-0ef7ec71270f-kube-api-access-lgkzm\") pod \"e903b0cd-392a-48f9-b437-0ef7ec71270f\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.397547 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e903b0cd-392a-48f9-b437-0ef7ec71270f-secret-volume\") pod \"e903b0cd-392a-48f9-b437-0ef7ec71270f\" (UID: \"e903b0cd-392a-48f9-b437-0ef7ec71270f\") " Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.397974 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e903b0cd-392a-48f9-b437-0ef7ec71270f-config-volume" (OuterVolumeSpecName: "config-volume") pod "e903b0cd-392a-48f9-b437-0ef7ec71270f" (UID: "e903b0cd-392a-48f9-b437-0ef7ec71270f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.402983 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e903b0cd-392a-48f9-b437-0ef7ec71270f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e903b0cd-392a-48f9-b437-0ef7ec71270f" (UID: "e903b0cd-392a-48f9-b437-0ef7ec71270f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.403005 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e903b0cd-392a-48f9-b437-0ef7ec71270f-kube-api-access-lgkzm" (OuterVolumeSpecName: "kube-api-access-lgkzm") pod "e903b0cd-392a-48f9-b437-0ef7ec71270f" (UID: "e903b0cd-392a-48f9-b437-0ef7ec71270f"). InnerVolumeSpecName "kube-api-access-lgkzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.499433 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e903b0cd-392a-48f9-b437-0ef7ec71270f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.499473 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e903b0cd-392a-48f9-b437-0ef7ec71270f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.499486 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgkzm\" (UniqueName: \"kubernetes.io/projected/e903b0cd-392a-48f9-b437-0ef7ec71270f-kube-api-access-lgkzm\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.945901 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" event={"ID":"e903b0cd-392a-48f9-b437-0ef7ec71270f","Type":"ContainerDied","Data":"4e69d551fecf8cf60b61eeafdeefc31615248b44022bb3bac51764ffd5d0893e"} Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.945950 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e69d551fecf8cf60b61eeafdeefc31615248b44022bb3bac51764ffd5d0893e" Jan 29 10:15:03 crc kubenswrapper[4771]: I0129 10:15:03.945954 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494695-zflwl" Jan 29 10:15:04 crc kubenswrapper[4771]: I0129 10:15:04.408364 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6"] Jan 29 10:15:04 crc kubenswrapper[4771]: I0129 10:15:04.417003 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-tknk6"] Jan 29 10:15:04 crc kubenswrapper[4771]: I0129 10:15:04.852324 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c1710e-3c07-44fb-9ab4-4d346e0f02e3" path="/var/lib/kubelet/pods/16c1710e-3c07-44fb-9ab4-4d346e0f02e3/volumes" Jan 29 10:15:14 crc kubenswrapper[4771]: I0129 10:15:14.271372 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:15:14 crc kubenswrapper[4771]: I0129 10:15:14.272152 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:15:34 crc kubenswrapper[4771]: I0129 10:15:34.542109 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7484874686-s4fjd_25d2516d-24c8-400e-acd4-d35b384046bb/barbican-api/0.log" Jan 29 10:15:34 crc kubenswrapper[4771]: I0129 10:15:34.664347 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7484874686-s4fjd_25d2516d-24c8-400e-acd4-d35b384046bb/barbican-api-log/0.log" Jan 29 10:15:34 crc kubenswrapper[4771]: I0129 10:15:34.680005 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c444788cd-vwtrs_49846e1c-6efb-4d4f-875c-ab051d11de09/barbican-keystone-listener/0.log" Jan 29 10:15:34 crc kubenswrapper[4771]: I0129 10:15:34.795910 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6c444788cd-vwtrs_49846e1c-6efb-4d4f-875c-ab051d11de09/barbican-keystone-listener-log/0.log" Jan 29 10:15:34 crc kubenswrapper[4771]: I0129 10:15:34.875575 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5575df4f89-jlhn2_bef7ac33-b62c-4372-a3f2-98b951265ef3/barbican-worker/0.log" Jan 29 10:15:34 crc kubenswrapper[4771]: I0129 10:15:34.924082 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5575df4f89-jlhn2_bef7ac33-b62c-4372-a3f2-98b951265ef3/barbican-worker-log/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.068063 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nglsq_d2af364a-dc24-46dc-bd14-8ad420af1812/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.137758 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/ceilometer-central-agent/1.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.276012 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/ceilometer-notification-agent/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.276841 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/ceilometer-central-agent/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.319659 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/proxy-httpd/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.322856 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1ea3117f-141f-46c2-bee3-71a88181068c/sg-core/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.493158 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c483fbc3-2b55-4c4e-bb34-600f2fe18bd2/cinder-api/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.518419 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c483fbc3-2b55-4c4e-bb34-600f2fe18bd2/cinder-api-log/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.700280 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0c71ced2-21b6-42f9-bcf8-1d844b6402ab/probe/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.722525 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qsr8s_620aab50-8510-4a95-a53c-7dd7fac714b6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.771380 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0c71ced2-21b6-42f9-bcf8-1d844b6402ab/cinder-scheduler/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.925837 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7jpcs_880c583d-29a4-44e0-83b0-16795d5eac98/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:35 crc kubenswrapper[4771]: I0129 10:15:35.957213 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-578c4b6ff9-9qfgr_b23b9082-b814-455c-a31b-8df578081bf4/init/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.192048 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-578c4b6ff9-9qfgr_b23b9082-b814-455c-a31b-8df578081bf4/dnsmasq-dns/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.199149 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-578c4b6ff9-9qfgr_b23b9082-b814-455c-a31b-8df578081bf4/init/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.211546 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2gvwl_8c4903c9-2b1b-4e50-9e27-49c4aa41974c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.388364 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_afe8ac11-63a3-4c6f-b46b-a8a79ba8e027/glance-log/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.417674 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_afe8ac11-63a3-4c6f-b46b-a8a79ba8e027/glance-httpd/0.log" Jan 29 
10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.618916 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fvh6d"] Jan 29 10:15:36 crc kubenswrapper[4771]: E0129 10:15:36.619517 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e903b0cd-392a-48f9-b437-0ef7ec71270f" containerName="collect-profiles" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.619541 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e903b0cd-392a-48f9-b437-0ef7ec71270f" containerName="collect-profiles" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.619818 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e903b0cd-392a-48f9-b437-0ef7ec71270f" containerName="collect-profiles" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.621617 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.642342 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvh6d"] Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.668320 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d8b13b11-fc7b-4228-b762-27c0ae94ae33/glance-httpd/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.696183 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d8b13b11-fc7b-4228-b762-27c0ae94ae33/glance-log/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.715559 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67d9579b5b-l9trm_3d093a30-424c-4a0c-a749-7a47328c4b2d/horizon/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.802064 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-utilities\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.802133 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-catalog-content\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.802988 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwvkg\" (UniqueName: \"kubernetes.io/projected/d8183582-ddce-4f45-936d-6be11c643eaa-kube-api-access-mwvkg\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.903026 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kwh7d_5fc93c8e-ca3e-403c-b42e-fea90628e728/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.904378 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-utilities\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.904465 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-catalog-content\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.904544 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwvkg\" (UniqueName: \"kubernetes.io/projected/d8183582-ddce-4f45-936d-6be11c643eaa-kube-api-access-mwvkg\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.904836 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-utilities\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.905140 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-catalog-content\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.927402 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwvkg\" (UniqueName: \"kubernetes.io/projected/d8183582-ddce-4f45-936d-6be11c643eaa-kube-api-access-mwvkg\") pod \"redhat-marketplace-fvh6d\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:36 crc kubenswrapper[4771]: I0129 10:15:36.943091 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:37 crc kubenswrapper[4771]: I0129 10:15:37.289019 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zb8mb_72d4ad1e-f80c-43d6-a515-3f08c23df279/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:37 crc kubenswrapper[4771]: I0129 10:15:37.304952 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67d9579b5b-l9trm_3d093a30-424c-4a0c-a749-7a47328c4b2d/horizon-log/0.log" Jan 29 10:15:37 crc kubenswrapper[4771]: I0129 10:15:37.419860 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvh6d"] Jan 29 10:15:37 crc kubenswrapper[4771]: I0129 10:15:37.500137 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67ddc4bf8b-n46xf_f202e04b-5581-45cc-9b76-da029ee47b31/keystone-api/0.log" Jan 29 10:15:37 crc kubenswrapper[4771]: I0129 10:15:37.579187 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494681-2s5hx_c0370c10-53ad-4d77-8869-f5c727a41d8c/keystone-cron/0.log" Jan 29 10:15:37 crc kubenswrapper[4771]: I0129 10:15:37.673724 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cd7da89d-e16f-404a-b1dd-ccaaa3069431/kube-state-metrics/0.log" Jan 29 10:15:37 crc kubenswrapper[4771]: I0129 10:15:37.802820 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tw2th_8cb7a0bd-4a49-4b9c-ae51-86219526db00/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:38 crc kubenswrapper[4771]: I0129 10:15:38.157206 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78b5bf9d6f-2454r_814be124-8d28-4fa9-b792-9d6561d105f9/neutron-api/0.log" Jan 29 10:15:38 crc kubenswrapper[4771]: I0129 10:15:38.185120 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78b5bf9d6f-2454r_814be124-8d28-4fa9-b792-9d6561d105f9/neutron-httpd/0.log" Jan 29 10:15:38 crc kubenswrapper[4771]: I0129 10:15:38.272235 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-g75cs_262d9611-9da4-4ea4-82ab-abbcaab91a0d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:38 crc kubenswrapper[4771]: I0129 10:15:38.312425 4771 generic.go:334] "Generic (PLEG): container finished" podID="d8183582-ddce-4f45-936d-6be11c643eaa" containerID="7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd" exitCode=0 Jan 29 10:15:38 crc kubenswrapper[4771]: I0129 10:15:38.312471 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvh6d" event={"ID":"d8183582-ddce-4f45-936d-6be11c643eaa","Type":"ContainerDied","Data":"7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd"} Jan 29 10:15:38 crc kubenswrapper[4771]: I0129 10:15:38.312496 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvh6d" event={"ID":"d8183582-ddce-4f45-936d-6be11c643eaa","Type":"ContainerStarted","Data":"d0b37fa1a753741ab87572433d82f5e9c3dfa1bb16539e8138a693d9a2c23c72"} Jan 29 10:15:38 crc kubenswrapper[4771]: I0129 10:15:38.943237 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_460724f7-49b9-475d-a983-5ffa6998815d/nova-api-log/0.log" Jan 29 10:15:39 crc 
kubenswrapper[4771]: I0129 10:15:39.090986 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2dddef1c-bcb1-48f7-816d-d24276dd7571/nova-cell0-conductor-conductor/0.log" Jan 29 10:15:39 crc kubenswrapper[4771]: I0129 10:15:39.248641 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_03fe6bc4-290b-46aa-b934-4d6849586b41/nova-cell1-conductor-conductor/0.log" Jan 29 10:15:39 crc kubenswrapper[4771]: I0129 10:15:39.321568 4771 generic.go:334] "Generic (PLEG): container finished" podID="d8183582-ddce-4f45-936d-6be11c643eaa" containerID="c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833" exitCode=0 Jan 29 10:15:39 crc kubenswrapper[4771]: I0129 10:15:39.321609 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvh6d" event={"ID":"d8183582-ddce-4f45-936d-6be11c643eaa","Type":"ContainerDied","Data":"c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833"} Jan 29 10:15:39 crc kubenswrapper[4771]: I0129 10:15:39.448444 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_38681ed7-61ea-42cb-b2bf-8deba8d236bf/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 10:15:39 crc kubenswrapper[4771]: I0129 10:15:39.484639 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-8wnhx_54a83150-21fe-4085-ad4c-5eb77724684a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:39 crc kubenswrapper[4771]: I0129 10:15:39.493702 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_460724f7-49b9-475d-a983-5ffa6998815d/nova-api-api/0.log" Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.250018 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_61ef1c72-f256-4e8a-ad21-b4cae84753e5/nova-metadata-log/0.log" Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.335471 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvh6d" event={"ID":"d8183582-ddce-4f45-936d-6be11c643eaa","Type":"ContainerStarted","Data":"2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d"} Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.359270 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fvh6d" podStartSLOduration=2.9408537089999998 podStartE2EDuration="4.359248873s" podCreationTimestamp="2026-01-29 10:15:36 +0000 UTC" firstStartedPulling="2026-01-29 10:15:38.314722087 +0000 UTC m=+4158.437562314" lastFinishedPulling="2026-01-29 10:15:39.733117251 +0000 UTC m=+4159.855957478" observedRunningTime="2026-01-29 10:15:40.356605041 +0000 UTC m=+4160.479445278" watchObservedRunningTime="2026-01-29 10:15:40.359248873 +0000 UTC m=+4160.482089100" Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.526532 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ff06b9bb-31fc-437f-96fb-6ab586bb9918/mysql-bootstrap/0.log" Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.609386 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d24a45bb-85f8-42c2-bee5-0b5407bdc52e/nova-scheduler-scheduler/0.log" Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.699555 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_ff06b9bb-31fc-437f-96fb-6ab586bb9918/mysql-bootstrap/0.log" Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.824528 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ff06b9bb-31fc-437f-96fb-6ab586bb9918/galera/0.log" Jan 29 10:15:40 crc kubenswrapper[4771]: I0129 10:15:40.997011 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd70aa50-2651-4840-a551-44a608ccb08b/mysql-bootstrap/0.log" Jan 29 10:15:41 crc kubenswrapper[4771]: I0129 10:15:41.727950 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd70aa50-2651-4840-a551-44a608ccb08b/mysql-bootstrap/0.log" Jan 29 10:15:41 crc kubenswrapper[4771]: I0129 10:15:41.745379 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd70aa50-2651-4840-a551-44a608ccb08b/galera/0.log" Jan 29 10:15:41 crc kubenswrapper[4771]: I0129 10:15:41.788810 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_61ef1c72-f256-4e8a-ad21-b4cae84753e5/nova-metadata-metadata/0.log" Jan 29 10:15:41 crc kubenswrapper[4771]: I0129 10:15:41.970269 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0605e923-8ce6-4789-89f7-214d47422865/openstackclient/0.log" Jan 29 10:15:41 crc kubenswrapper[4771]: I0129 10:15:41.980238 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hqvn7_e1390576-f674-420d-93a7-2bee6d52f9f0/ovn-controller/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.184142 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovsdb-server-init/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.248125 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q7x7j_3bde3888-b70c-434c-b553-da79ce5ff68d/openstack-network-exporter/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.425611 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovsdb-server-init/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.459019 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovsdb-server/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.462413 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sjg8v_2483b251-476f-45b5-a46e-60f4dfe1024f/ovs-vswitchd/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.667181 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-99hxx_b5312b86-21a2-4c0e-81ea-6a7fb49bd4fc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.692741 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3656052-f3d0-4665-9fc7-8236cede743b/openstack-network-exporter/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.779414 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3656052-f3d0-4665-9fc7-8236cede743b/ovn-northd/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.934839 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_34887d57-0fb9-4617-b9d0-1338663bd16b/ovsdbserver-nb/0.log" Jan 29 10:15:42 crc kubenswrapper[4771]: I0129 10:15:42.946165 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_34887d57-0fb9-4617-b9d0-1338663bd16b/openstack-network-exporter/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.131410 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d2c641f1-e0cc-4892-8e36-9a70ee2bacc9/openstack-network-exporter/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.168562 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d2c641f1-e0cc-4892-8e36-9a70ee2bacc9/ovsdbserver-sb/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.317916 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b66bb6fb-89w2j_93c318ae-5098-47e2-a09d-e67fe5124ed5/placement-api/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.389102 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23244fea-bb17-4ba0-b353-d4f98af3d93d/setup-container/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.479419 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b66bb6fb-89w2j_93c318ae-5098-47e2-a09d-e67fe5124ed5/placement-log/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.620630 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23244fea-bb17-4ba0-b353-d4f98af3d93d/setup-container/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.696333 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23244fea-bb17-4ba0-b353-d4f98af3d93d/rabbitmq/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.719598 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_222f1966-eb07-4bcb-986d-70287a36fc90/setup-container/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.926895 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_222f1966-eb07-4bcb-986d-70287a36fc90/setup-container/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.957482 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_222f1966-eb07-4bcb-986d-70287a36fc90/rabbitmq/0.log" Jan 29 10:15:43 crc kubenswrapper[4771]: I0129 10:15:43.977952 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nqpbs_665612c6-6f64-4e6e-a9d7-770665c7abff/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.176914 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4hnbg_1861fab3-9c27-41e5-b792-6df4cb346a1d/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.195425 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rchh6_11b71af0-5437-4b20-a2a0-68897b1f8c78/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.271211 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.271265 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.405910 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dzfr4_183736e8-0ae6-459f-9dbc-1b5a9d60539d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.486958 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mb82t_b4199724-f14d-423d-82f1-8a1438e624fb/ssh-known-hosts-edpm-deployment/0.log" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.695778 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55bc4c6647-vmgxk_70ca5b45-1804-4830-8ede-b28279d8d4ce/proxy-server/0.log" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.767798 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55bc4c6647-vmgxk_70ca5b45-1804-4830-8ede-b28279d8d4ce/proxy-httpd/0.log" Jan 29 10:15:44 crc kubenswrapper[4771]: I0129 10:15:44.791291 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-snzfb_543a7e6c-ab47-4720-b5f0-6b0800904d36/swift-ring-rebalance/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.039506 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-auditor/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.052276 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-reaper/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.077915 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-replicator/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.162687 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/account-server/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.244441 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-auditor/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.307457 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-replicator/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.338477 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-server/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.355311 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/container-updater/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.473714 4771 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-expirer/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.504645 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-auditor/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.587786 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-server/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.598258 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-replicator/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.688774 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/object-updater/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.711722 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/rsync/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.772112 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e6ce7b26-bcc5-4306-ab2c-5691cceeb18f/swift-recon-cron/0.log" Jan 29 10:15:45 crc kubenswrapper[4771]: I0129 10:15:45.998535 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hzbb6_10f62904-f030-4614-accd-3c95e39c2c6a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:46 crc kubenswrapper[4771]: I0129 10:15:46.013263 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_be095875-5658-44fb-9c4b-90d1bc093cf3/tempest-tests-tempest-tests-runner/0.log" Jan 29 10:15:46 crc kubenswrapper[4771]: I0129 10:15:46.237738 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_71cc9425-62dc-4336-8f57-765e49ea1b7e/test-operator-logs-container/0.log" Jan 29 10:15:46 crc kubenswrapper[4771]: I0129 10:15:46.292539 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bssfw_8c8a65a0-1d3a-413d-964f-71d69bb1c3b7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 10:15:46 crc kubenswrapper[4771]: I0129 10:15:46.943449 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:46 crc kubenswrapper[4771]: I0129 10:15:46.943496 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:47 crc kubenswrapper[4771]: I0129 10:15:47.002983 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:47 crc kubenswrapper[4771]: I0129 10:15:47.131979 4771 scope.go:117] "RemoveContainer" containerID="d2cab39e5c6bcadbbfd6ef771c8824e708f2f2a4dd0450f4eb03ff98e907fee7" Jan 29 10:15:47 crc kubenswrapper[4771]: I0129 10:15:47.468623 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:47 crc kubenswrapper[4771]: I0129 10:15:47.527154 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-fvh6d"] Jan 29 10:15:49 crc kubenswrapper[4771]: I0129 10:15:49.421785 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fvh6d" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" containerName="registry-server" containerID="cri-o://2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d" gracePeriod=2 Jan 29 10:15:49 crc kubenswrapper[4771]: I0129 10:15:49.936997 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.046624 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwvkg\" (UniqueName: \"kubernetes.io/projected/d8183582-ddce-4f45-936d-6be11c643eaa-kube-api-access-mwvkg\") pod \"d8183582-ddce-4f45-936d-6be11c643eaa\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.046877 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-utilities\") pod \"d8183582-ddce-4f45-936d-6be11c643eaa\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.046923 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-catalog-content\") pod \"d8183582-ddce-4f45-936d-6be11c643eaa\" (UID: \"d8183582-ddce-4f45-936d-6be11c643eaa\") " Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.050397 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-utilities" (OuterVolumeSpecName: "utilities") pod "d8183582-ddce-4f45-936d-6be11c643eaa" (UID: "d8183582-ddce-4f45-936d-6be11c643eaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.064153 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8183582-ddce-4f45-936d-6be11c643eaa-kube-api-access-mwvkg" (OuterVolumeSpecName: "kube-api-access-mwvkg") pod "d8183582-ddce-4f45-936d-6be11c643eaa" (UID: "d8183582-ddce-4f45-936d-6be11c643eaa"). InnerVolumeSpecName "kube-api-access-mwvkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.082812 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8183582-ddce-4f45-936d-6be11c643eaa" (UID: "d8183582-ddce-4f45-936d-6be11c643eaa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.149957 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwvkg\" (UniqueName: \"kubernetes.io/projected/d8183582-ddce-4f45-936d-6be11c643eaa-kube-api-access-mwvkg\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.149986 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.149996 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8183582-ddce-4f45-936d-6be11c643eaa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.431218 4771 generic.go:334] "Generic (PLEG): container finished" podID="d8183582-ddce-4f45-936d-6be11c643eaa" containerID="2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d" exitCode=0 Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.431279 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvh6d" event={"ID":"d8183582-ddce-4f45-936d-6be11c643eaa","Type":"ContainerDied","Data":"2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d"} Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.431311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvh6d" event={"ID":"d8183582-ddce-4f45-936d-6be11c643eaa","Type":"ContainerDied","Data":"d0b37fa1a753741ab87572433d82f5e9c3dfa1bb16539e8138a693d9a2c23c72"} Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.431328 4771 scope.go:117] "RemoveContainer" containerID="2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.434571 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvh6d" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.450605 4771 scope.go:117] "RemoveContainer" containerID="c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.509398 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvh6d"] Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.518321 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvh6d"] Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.518514 4771 scope.go:117] "RemoveContainer" containerID="7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.535169 4771 scope.go:117] "RemoveContainer" containerID="2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d" Jan 29 10:15:50 crc kubenswrapper[4771]: E0129 10:15:50.537122 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d\": container with ID starting with 2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d not found: ID does not exist" containerID="2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.537165 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d"} err="failed to get container status \"2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d\": rpc error: code = NotFound desc = could not find container \"2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d\": container with ID starting with 2f5d1fdb2909ddce1897f537318f74d7fe087725f9d818a4f2d086a81023263d not found: ID does not exist" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.537192 4771 scope.go:117] "RemoveContainer" containerID="c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833" Jan 29 10:15:50 crc kubenswrapper[4771]: E0129 10:15:50.537434 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833\": container with ID starting with c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833 not found: ID does not exist" containerID="c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.537451 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833"} err="failed to get container status \"c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833\": rpc error: code = NotFound desc = could not find container \"c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833\": container with ID starting with c277fe684ee29847520e349e0d88af50816ba0aeacc9949e104e511534d65833 not found: ID does not exist" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.537466 4771 scope.go:117] "RemoveContainer" containerID="7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd" Jan 29 10:15:50 crc kubenswrapper[4771]: E0129 10:15:50.537643 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd\": container with ID starting with 7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd not found: ID does not exist" containerID="7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.537662 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd"} err="failed to get container status \"7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd\": rpc error: code = NotFound desc = could not find container \"7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd\": container with ID starting with 7f8f278abb417445c40e3098bdf1aeb3ef29d09ce18b4c4edb791d5f9786f0dd not found: ID does not exist" Jan 29 10:15:50 crc kubenswrapper[4771]: I0129 10:15:50.848004 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" path="/var/lib/kubelet/pods/d8183582-ddce-4f45-936d-6be11c643eaa/volumes" Jan 29 10:15:56 crc kubenswrapper[4771]: I0129 10:15:56.683388 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_71cb4a34-0373-453e-b103-3e6e0a00ff0c/memcached/0.log" Jan 29 10:16:13 crc kubenswrapper[4771]: I0129 10:16:13.887104 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/util/0.log" Jan 29 10:16:13 crc kubenswrapper[4771]: I0129 10:16:13.998396 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/util/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.054089 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/pull/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.057575 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/pull/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.250544 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/pull/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.271419 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.271673 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.271850 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.272609 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a54ed82061c8a1ebeeee137c8955ccd1a70441f4b919a09c91b46f159da93f1"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.272754 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://5a54ed82061c8a1ebeeee137c8955ccd1a70441f4b919a09c91b46f159da93f1" gracePeriod=600 Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.288871 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/extract/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.310839 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93ad486e170fa5abb6d2daff1531cb761c181826699754810e48b578ees79g7_49bb4d23-3c87-4d67-adac-18be6e729790/util/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.564682 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-t96kk_742db07e-b8fa-472a-824c-ce57c4e3bca5/manager/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.630905 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-w5l7q_6221aa48-bc7d-4a2f-9897-41dae47815e7/manager/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.677610 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="5a54ed82061c8a1ebeeee137c8955ccd1a70441f4b919a09c91b46f159da93f1" exitCode=0 Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.677673 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"5a54ed82061c8a1ebeeee137c8955ccd1a70441f4b919a09c91b46f159da93f1"} Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.677714 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerStarted","Data":"7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"} Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.677733 4771 scope.go:117] "RemoveContainer" containerID="62425ff8efe6f8e7538b7e8a6246c5455143911f9a14f8ab0356b45b184e0dba" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.827736 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-kzwjr_b7b0237e-e4f4-4ff1-81f6-3f54c39d6a8e/manager/0.log" Jan 29 10:16:14 crc kubenswrapper[4771]: I0129 10:16:14.975817 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-59xvx_29710697-a286-413b-a7ce-01631b4cc6de/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.061065 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-2cq8m_e0d87d52-0e91-4f3f-bbf2-228b57bbcff7/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.181974 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-xczpv_7f152534-d323-4bdc-9d0e-86e673b65a56/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.406458 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-rjn7t_f9b6f2b9-26dd-44f5-859d-f9a1828d726d/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.489571 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-ltjpd_be5b01ce-6d7f-40e9-9e6e-3291fab1d242/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.579172 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-wfjjb_8373cf12-3567-409b-ae85-1f530e91c86a/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.643377 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-c298s_da6fb9cf-4fe9-41a8-a645-0d98a36e9472/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.830316 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-qpqh8_013a2529-271b-4c1d-8ac4-3b443a9d1069/manager/0.log" Jan 29 10:16:15 crc kubenswrapper[4771]: I0129 10:16:15.963438 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-j2vm7_a9c9f6d2-b488-4184-b7dd-46e228737c64/manager/0.log" Jan 29 10:16:16 crc kubenswrapper[4771]: I0129 10:16:16.073101 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-j8578_bbfc6317-5079-4f9f-83e0-9f93970a0710/manager/0.log" Jan 29 10:16:16 crc kubenswrapper[4771]: I0129 10:16:16.228908 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-q74t2_9ae26fe7-fcd5-4006-aa5d-133b8b91e521/manager/0.log" Jan 29 10:16:16 crc kubenswrapper[4771]: I0129 10:16:16.269139 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dfj92k_f5c86f6b-dae6-4551-9413-df4e429c0ffa/manager/0.log" Jan 29 10:16:16 crc kubenswrapper[4771]: I0129 10:16:16.628386 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5f8b9f866c-gltlv_ca41b7ae-086c-41c0-b397-3239655e4d1d/operator/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.362379 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-mkl7v_488821bb-04ee-4c62-b4a3-ac83d74a8919/manager/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.385181 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-xs5n2_4caf16d8-dac6-4281-96a5-97e82d2a828f/registry-server/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.589716 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-w9c7v_c594b46c-4d8f-4604-a70d-91544ff13805/manager/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.662297 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9gz6m_40f4ff01-59ff-4cb1-a683-6e1da9756691/operator/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.691986 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cf4dc6f96-vhpg8_72d8430d-b468-4e7b-a568-bb12c9a4c856/manager/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.767955 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-p7tj6_688a9f5e-ab0d-4975-a033-a8cdf403fd9e/manager/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.888838 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-46m5f_5c0452ac-093e-45b1-825f-3ba01ed93425/manager/0.log" Jan 29 10:16:17 crc kubenswrapper[4771]: I0129 10:16:17.952719 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pkgdd_c7e92467-0347-42d4-9628-639368c69b80/manager/0.log" Jan 29 10:16:18 crc kubenswrapper[4771]: I0129 10:16:18.266103 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-66qbk_5048415a-36d8-47a9-aed1-f7395e309ce3/manager/0.log" Jan 29 10:16:38 crc kubenswrapper[4771]: I0129 10:16:38.652896 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qvr68_34f0263f-c771-4ef0-91be-9d37f9ba6d60/control-plane-machine-set-operator/0.log" Jan 29 10:16:38 crc kubenswrapper[4771]: I0129 10:16:38.854674 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rsp6c_acd89578-60c3-4368-9b2c-59dc899d1a08/kube-rbac-proxy/0.log" Jan 29 10:16:39 crc kubenswrapper[4771]: I0129 10:16:39.129584 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rsp6c_acd89578-60c3-4368-9b2c-59dc899d1a08/machine-api-operator/0.log" Jan 29 10:16:51 crc kubenswrapper[4771]: I0129 10:16:51.781565 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dds85_0eb51574-328d-4156-aa8d-50355bb9d9c2/cert-manager-controller/0.log" Jan 29 10:16:51 crc kubenswrapper[4771]: I0129 10:16:51.998317 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7r6xt_b8b4db5a-9eaa-4640-8031-185eede7de9b/cert-manager-cainjector/0.log" Jan 29 10:16:52 crc kubenswrapper[4771]: I0129 10:16:52.059549 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-knhgs_fd41faef-aa84-4754-8dc1-36aeafc1e4c3/cert-manager-webhook/0.log" Jan 29 10:17:04 crc kubenswrapper[4771]: I0129 10:17:04.483584 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-cv75t_f3164bb7-413d-4fc4-b166-62ea6f7840cd/nmstate-console-plugin/0.log" Jan 29 10:17:04 crc kubenswrapper[4771]: I0129 10:17:04.683169 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ht6nn_a3334432-223c-4661-b12b-ec8524c6439d/nmstate-handler/0.log" Jan 29 10:17:04 crc kubenswrapper[4771]: I0129 10:17:04.747589 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dcp48_1755a15c-b178-498c-a5ca-077feb480beb/kube-rbac-proxy/0.log" Jan 29 10:17:04 crc kubenswrapper[4771]: I0129 10:17:04.875611 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dcp48_1755a15c-b178-498c-a5ca-077feb480beb/nmstate-metrics/0.log" Jan 29 10:17:04 crc kubenswrapper[4771]: I0129 10:17:04.916146 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-zq5mf_6a2f99d9-e297-4f09-8afd-3ab95322be73/nmstate-operator/0.log" Jan 29 10:17:05 crc kubenswrapper[4771]: I0129 10:17:05.026114 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-dgdkp_8b851beb-4218-4b45-8f3a-695b6c6cd02f/nmstate-webhook/0.log" Jan 29 10:17:35 crc kubenswrapper[4771]: I0129 10:17:35.521819 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mgllh_a1bccade-3951-4e67-9078-70e904be5b4c/kube-rbac-proxy/0.log" Jan 29 10:17:35 crc kubenswrapper[4771]: I0129 10:17:35.647977 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mgllh_a1bccade-3951-4e67-9078-70e904be5b4c/controller/0.log" Jan 29 10:17:35 crc kubenswrapper[4771]: I0129 10:17:35.761404 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:17:35 crc kubenswrapper[4771]: I0129 10:17:35.981084 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.002748 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.023133 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.033973 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.146829 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.195677 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.232516 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.240422 4771 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.446344 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-frr-files/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.468270 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-metrics/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.468526 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/cp-reloader/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.482773 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/controller/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.692967 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/frr-metrics/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.693677 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/kube-rbac-proxy-frr/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.700539 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/kube-rbac-proxy/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.878994 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6sfz7_5c791596-99d9-4d8f-ba02-c4b866a007a4/frr-k8s-webhook-server/0.log" Jan 29 10:17:36 crc kubenswrapper[4771]: I0129 10:17:36.907899 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/reloader/0.log" Jan 29 10:17:37 crc kubenswrapper[4771]: I0129 10:17:37.163395 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-86b88966b-ts5vb_a0a8dfb7-3f50-4649-ade4-04de19016aaf/manager/0.log" Jan 29 10:17:37 crc kubenswrapper[4771]: I0129 10:17:37.280688 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-88c44cd79-5zvsz_760f2b5f-d6d9-4bae-bb03-02c91232b71d/webhook-server/0.log" Jan 29 10:17:37 crc kubenswrapper[4771]: I0129 10:17:37.452569 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-44br2_05c9b0d5-8464-4769-bb43-685213c34f16/kube-rbac-proxy/0.log" Jan 29 10:17:37 crc kubenswrapper[4771]: I0129 10:17:37.958916 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-44br2_05c9b0d5-8464-4769-bb43-685213c34f16/speaker/0.log" Jan 29 10:17:38 crc kubenswrapper[4771]: I0129 10:17:38.186991 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-f4cw8_8579bfb4-69ea-4f49-aefd-46082a0d7eb0/frr/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.041683 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/util/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.292311 4771 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/pull/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.314396 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/pull/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.364106 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/util/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.602262 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/util/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.644197 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/extract/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.646313 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctpcvr_38d92cc8-2fa5-4a7b-8e90-214597eb9fc0/pull/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.773154 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/util/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.996173 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/pull/0.log" Jan 29 10:17:53 crc kubenswrapper[4771]: I0129 10:17:53.999019 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/util/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.006860 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/pull/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.216837 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/extract/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.221046 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/util/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.229556 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136v8j7_c8d025a2-ea4f-4656-8c6f-23b6fc4ffc33/pull/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.380193 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-utilities/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.571667 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-content/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.583662 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-utilities/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.617951 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-content/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.808373 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-content/0.log" Jan 29 10:17:54 crc kubenswrapper[4771]: I0129 10:17:54.884798 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/extract-utilities/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.126794 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-utilities/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.143369 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dpzmf_aae2d3e3-66ee-4633-adb9-195a80952a82/registry-server/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.279263 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-utilities/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.326831 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-content/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.372184 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-content/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.579669 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-content/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.580648 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/extract-utilities/0.log" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.635657 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcstn"] Jan 29 10:17:55 crc kubenswrapper[4771]: E0129 10:17:55.642173 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" containerName="extract-utilities" Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.642224 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" containerName="extract-utilities" Jan 29 10:17:55 crc 
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.642255 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" containerName="extract-content"
Jan 29 10:17:55 crc kubenswrapper[4771]: E0129 10:17:55.642276 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" containerName="registry-server"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.642283 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" containerName="registry-server"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.642518 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8183582-ddce-4f45-936d-6be11c643eaa" containerName="registry-server"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.644505 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.650740 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcstn"]
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.811781 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-utilities\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.812479 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9dd\" (UniqueName: \"kubernetes.io/projected/c1a7478e-1cb9-4da2-be49-06920cea7fa7-kube-api-access-kv9dd\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.812854 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-catalog-content\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.862312 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ts5f5_f98a10ab-5df8-4994-b6a3-c62c3c3a8c82/marketplace-operator/0.log"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.914368 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-utilities\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.914447 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9dd\" (UniqueName: \"kubernetes.io/projected/c1a7478e-1cb9-4da2-be49-06920cea7fa7-kube-api-access-kv9dd\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.914518 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-catalog-content\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.915087 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-catalog-content\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.916790 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-utilities\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.926585 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-utilities/0.log"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.937543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9dd\" (UniqueName: \"kubernetes.io/projected/c1a7478e-1cb9-4da2-be49-06920cea7fa7-kube-api-access-kv9dd\") pod \"certified-operators-vcstn\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " pod="openshift-marketplace/certified-operators-vcstn"
Jan 29 10:17:55 crc kubenswrapper[4771]: I0129 10:17:55.994172 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcstn"
Need to start a new one" pod="openshift-marketplace/certified-operators-vcstn" Jan 29 10:17:56 crc kubenswrapper[4771]: I0129 10:17:56.421450 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sss7k_48ca40d1-6622-443e-b79e-fe6896d2f66d/registry-server/0.log" Jan 29 10:17:56 crc kubenswrapper[4771]: I0129 10:17:56.514308 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcstn"] Jan 29 10:17:56 crc kubenswrapper[4771]: I0129 10:17:56.569309 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-utilities/0.log" Jan 29 10:17:56 crc kubenswrapper[4771]: I0129 10:17:56.665930 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcstn" event={"ID":"c1a7478e-1cb9-4da2-be49-06920cea7fa7","Type":"ContainerStarted","Data":"3cb57a2768ad49932d1cffad473b0f4cd4e336cf7222ad70eb777374b2dffab4"} Jan 29 10:17:56 crc kubenswrapper[4771]: I0129 10:17:56.695370 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-content/0.log" Jan 29 10:17:56 crc kubenswrapper[4771]: I0129 10:17:56.698366 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-content/0.log" Jan 29 10:17:56 crc kubenswrapper[4771]: I0129 10:17:56.950547 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-content/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.027580 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/extract-utilities/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.106100 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-utilities/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.132114 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nfdkd_8f515b08-46ce-4d24-ba30-d3b4b9bee0f1/registry-server/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.299642 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-utilities/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.305922 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-content/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.319505 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-content/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.492176 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-content/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.520615 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/extract-utilities/0.log" Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.687611 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1a7478e-1cb9-4da2-be49-06920cea7fa7" containerID="44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399" exitCode=0 Jan 29 10:17:57 crc kubenswrapper[4771]: I0129 10:17:57.687661 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcstn" event={"ID":"c1a7478e-1cb9-4da2-be49-06920cea7fa7","Type":"ContainerDied","Data":"44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399"} Jan 29 10:17:58 crc kubenswrapper[4771]: I0129 10:17:58.222770 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4tk6z_c03f3394-a05b-4de0-ba06-1191b58d6fa8/registry-server/0.log" Jan 29 10:17:58 crc kubenswrapper[4771]: I0129 10:17:58.701278 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcstn" event={"ID":"c1a7478e-1cb9-4da2-be49-06920cea7fa7","Type":"ContainerStarted","Data":"0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a"} Jan 29 10:18:00 crc kubenswrapper[4771]: I0129 10:18:00.727329 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1a7478e-1cb9-4da2-be49-06920cea7fa7" containerID="0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a" exitCode=0 Jan 29 10:18:00 crc kubenswrapper[4771]: I0129 10:18:00.727405 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcstn" event={"ID":"c1a7478e-1cb9-4da2-be49-06920cea7fa7","Type":"ContainerDied","Data":"0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a"} Jan 29 10:18:01 crc kubenswrapper[4771]: I0129 10:18:01.741638 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcstn" event={"ID":"c1a7478e-1cb9-4da2-be49-06920cea7fa7","Type":"ContainerStarted","Data":"f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17"} Jan 29 10:18:01 crc kubenswrapper[4771]: I0129 10:18:01.764770 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcstn" podStartSLOduration=3.3071138749999998 podStartE2EDuration="6.764752371s" podCreationTimestamp="2026-01-29 10:17:55 +0000 UTC" firstStartedPulling="2026-01-29 10:17:57.69188749 +0000 UTC m=+4297.814727727" lastFinishedPulling="2026-01-29 10:18:01.149525986 +0000 UTC m=+4301.272366223" observedRunningTime="2026-01-29 10:18:01.763891608 +0000 UTC m=+4301.886731855" watchObservedRunningTime="2026-01-29 10:18:01.764752371 +0000 UTC m=+4301.887592598" Jan 29 10:18:06 crc kubenswrapper[4771]: I0129 10:18:05.999814 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcstn" Jan 29 10:18:06 crc kubenswrapper[4771]: I0129 10:18:06.000435 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcstn" Jan 29 10:18:06 crc kubenswrapper[4771]: I0129 10:18:06.050586 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcstn" Jan 29 10:18:06 crc kubenswrapper[4771]: I0129 10:18:06.882202 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-vcstn" Jan 29 10:18:06 crc kubenswrapper[4771]: I0129 10:18:06.959615 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcstn"] Jan 29 10:18:08 crc kubenswrapper[4771]: I0129 10:18:08.826166 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vcstn" podUID="c1a7478e-1cb9-4da2-be49-06920cea7fa7" containerName="registry-server" containerID="cri-o://f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17" gracePeriod=2 Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.280705 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcstn" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.401064 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv9dd\" (UniqueName: \"kubernetes.io/projected/c1a7478e-1cb9-4da2-be49-06920cea7fa7-kube-api-access-kv9dd\") pod \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.401266 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-catalog-content\") pod \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.401436 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-utilities\") pod \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\" (UID: \"c1a7478e-1cb9-4da2-be49-06920cea7fa7\") " Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.402175 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-utilities" (OuterVolumeSpecName: "utilities") pod "c1a7478e-1cb9-4da2-be49-06920cea7fa7" (UID: "c1a7478e-1cb9-4da2-be49-06920cea7fa7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.417558 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a7478e-1cb9-4da2-be49-06920cea7fa7-kube-api-access-kv9dd" (OuterVolumeSpecName: "kube-api-access-kv9dd") pod "c1a7478e-1cb9-4da2-be49-06920cea7fa7" (UID: "c1a7478e-1cb9-4da2-be49-06920cea7fa7"). InnerVolumeSpecName "kube-api-access-kv9dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.503558 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.503595 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv9dd\" (UniqueName: \"kubernetes.io/projected/c1a7478e-1cb9-4da2-be49-06920cea7fa7-kube-api-access-kv9dd\") on node \"crc\" DevicePath \"\"" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.838466 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1a7478e-1cb9-4da2-be49-06920cea7fa7" containerID="f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17" exitCode=0 Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.838508 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcstn" event={"ID":"c1a7478e-1cb9-4da2-be49-06920cea7fa7","Type":"ContainerDied","Data":"f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17"} Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.838548 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcstn" event={"ID":"c1a7478e-1cb9-4da2-be49-06920cea7fa7","Type":"ContainerDied","Data":"3cb57a2768ad49932d1cffad473b0f4cd4e336cf7222ad70eb777374b2dffab4"} Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.838557 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcstn" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.838568 4771 scope.go:117] "RemoveContainer" containerID="f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.860120 4771 scope.go:117] "RemoveContainer" containerID="0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.889466 4771 scope.go:117] "RemoveContainer" containerID="44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.923748 4771 scope.go:117] "RemoveContainer" containerID="f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17" Jan 29 10:18:09 crc kubenswrapper[4771]: E0129 10:18:09.924222 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17\": container with ID starting with f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17 not found: ID does not exist" containerID="f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.924276 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17"} err="failed to get container status \"f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17\": rpc error: code = NotFound desc = could not find container \"f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17\": container with ID starting with f0bbc1f2def27a957349569da7c9debc421401c8381db7bd9073eb8c26f1eb17 not found: ID does not exist" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.924303 4771 scope.go:117] 
"RemoveContainer" containerID="0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a" Jan 29 10:18:09 crc kubenswrapper[4771]: E0129 10:18:09.924653 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a\": container with ID starting with 0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a not found: ID does not exist" containerID="0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.924815 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a"} err="failed to get container status \"0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a\": rpc error: code = NotFound desc = could not find container \"0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a\": container with ID starting with 0777e61f5a41c5be38a6c1e0e41c0f7938686e0b7ea2fe11cb71a7742ad0939a not found: ID does not exist" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.924850 4771 scope.go:117] "RemoveContainer" containerID="44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399" Jan 29 10:18:09 crc kubenswrapper[4771]: E0129 10:18:09.925218 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399\": container with ID starting with 44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399 not found: ID does not exist" containerID="44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.925237 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399"} err="failed to get container status \"44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399\": rpc error: code = NotFound desc = could not find container \"44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399\": container with ID starting with 44ad6eba9d7a331e80bfc1cffdedf452f6fc367832a1dcef99165315b5d8e399 not found: ID does not exist" Jan 29 10:18:09 crc kubenswrapper[4771]: I0129 10:18:09.961602 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1a7478e-1cb9-4da2-be49-06920cea7fa7" (UID: "c1a7478e-1cb9-4da2-be49-06920cea7fa7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:18:10 crc kubenswrapper[4771]: I0129 10:18:10.013328 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a7478e-1cb9-4da2-be49-06920cea7fa7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:18:10 crc kubenswrapper[4771]: I0129 10:18:10.176142 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcstn"] Jan 29 10:18:10 crc kubenswrapper[4771]: I0129 10:18:10.186565 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vcstn"] Jan 29 10:18:10 crc kubenswrapper[4771]: I0129 10:18:10.881843 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a7478e-1cb9-4da2-be49-06920cea7fa7" path="/var/lib/kubelet/pods/c1a7478e-1cb9-4da2-be49-06920cea7fa7/volumes" Jan 29 10:18:14 crc kubenswrapper[4771]: I0129 10:18:14.271208 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:18:14 crc kubenswrapper[4771]: I0129 10:18:14.271784 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:18:44 crc kubenswrapper[4771]: I0129 10:18:44.271517 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:18:44 crc kubenswrapper[4771]: I0129 10:18:44.272226 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.272225 4771 patch_prober.go:28] interesting pod/machine-config-daemon-79kz5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.272860 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.272908 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.273855 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"} pod="openshift-machine-config-operator/machine-config-daemon-79kz5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.273918 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerName="machine-config-daemon" containerID="cri-o://7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921" gracePeriod=600 Jan 29 10:19:14 crc kubenswrapper[4771]: E0129 10:19:14.427847 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.594595 4771 generic.go:334] "Generic (PLEG): container finished" podID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921" exitCode=0 Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.594661 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" event={"ID":"12eedc7e-dceb-4fc2-b26a-5a4a87846b1a","Type":"ContainerDied","Data":"7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"} Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.594744 4771 scope.go:117] "RemoveContainer" containerID="5a54ed82061c8a1ebeeee137c8955ccd1a70441f4b919a09c91b46f159da93f1" Jan 29 10:19:14 crc kubenswrapper[4771]: I0129 10:19:14.595359 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921" Jan 29 10:19:14 crc kubenswrapper[4771]: E0129 10:19:14.595639 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:19:25 crc kubenswrapper[4771]: I0129 10:19:25.839117 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921" Jan 29 10:19:25 crc kubenswrapper[4771]: E0129 10:19:25.840159 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:19:37 crc kubenswrapper[4771]: I0129 10:19:37.838497 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921" Jan 29 10:19:37 crc kubenswrapper[4771]: E0129 10:19:37.840205 4771 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:19:43 crc kubenswrapper[4771]: I0129 10:19:43.917111 4771 generic.go:334] "Generic (PLEG): container finished" podID="8718b930-7393-4bd0-8d4f-028684732b5f" containerID="238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce" exitCode=0 Jan 29 10:19:43 crc kubenswrapper[4771]: I0129 10:19:43.917196 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" event={"ID":"8718b930-7393-4bd0-8d4f-028684732b5f","Type":"ContainerDied","Data":"238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce"} Jan 29 10:19:43 crc kubenswrapper[4771]: I0129 10:19:43.918550 4771 scope.go:117] "RemoveContainer" containerID="238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce" Jan 29 10:19:44 crc kubenswrapper[4771]: I0129 10:19:44.822654 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t7jkn_must-gather-hkkx8_8718b930-7393-4bd0-8d4f-028684732b5f/gather/0.log" Jan 29 10:19:49 crc kubenswrapper[4771]: I0129 10:19:49.838116 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921" Jan 29 10:19:49 crc kubenswrapper[4771]: E0129 10:19:49.839291 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a" Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.399113 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t7jkn/must-gather-hkkx8"] Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.400000 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" podUID="8718b930-7393-4bd0-8d4f-028684732b5f" containerName="copy" containerID="cri-o://0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e" gracePeriod=2 Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.412214 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t7jkn/must-gather-hkkx8"] Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.853525 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t7jkn_must-gather-hkkx8_8718b930-7393-4bd0-8d4f-028684732b5f/copy/0.log" Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.854672 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.948956 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8718b930-7393-4bd0-8d4f-028684732b5f-must-gather-output\") pod \"8718b930-7393-4bd0-8d4f-028684732b5f\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.949054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87hsh\" (UniqueName: \"kubernetes.io/projected/8718b930-7393-4bd0-8d4f-028684732b5f-kube-api-access-87hsh\") pod \"8718b930-7393-4bd0-8d4f-028684732b5f\" (UID: \"8718b930-7393-4bd0-8d4f-028684732b5f\") " Jan 29 10:19:56 crc kubenswrapper[4771]: I0129 10:19:56.962887 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8718b930-7393-4bd0-8d4f-028684732b5f-kube-api-access-87hsh" (OuterVolumeSpecName: "kube-api-access-87hsh") pod "8718b930-7393-4bd0-8d4f-028684732b5f" (UID: "8718b930-7393-4bd0-8d4f-028684732b5f"). InnerVolumeSpecName "kube-api-access-87hsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.051586 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87hsh\" (UniqueName: \"kubernetes.io/projected/8718b930-7393-4bd0-8d4f-028684732b5f-kube-api-access-87hsh\") on node \"crc\" DevicePath \"\"" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.114458 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t7jkn_must-gather-hkkx8_8718b930-7393-4bd0-8d4f-028684732b5f/copy/0.log" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.115129 4771 generic.go:334] "Generic (PLEG): container finished" podID="8718b930-7393-4bd0-8d4f-028684732b5f" containerID="0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e" exitCode=143 Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.115261 4771 scope.go:117] "RemoveContainer" containerID="0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.115482 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t7jkn/must-gather-hkkx8" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.139222 4771 scope.go:117] "RemoveContainer" containerID="238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.181508 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8718b930-7393-4bd0-8d4f-028684732b5f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8718b930-7393-4bd0-8d4f-028684732b5f" (UID: "8718b930-7393-4bd0-8d4f-028684732b5f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.259311 4771 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8718b930-7393-4bd0-8d4f-028684732b5f-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.280766 4771 scope.go:117] "RemoveContainer" containerID="0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e" Jan 29 10:19:57 crc kubenswrapper[4771]: E0129 10:19:57.281409 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e\": container with ID starting with 0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e not found: ID does not exist" containerID="0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.281454 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e"} err="failed to get container status \"0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e\": rpc error: code = NotFound desc = could not find container \"0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e\": container with ID starting with 0587e916410f5ee911a2f90630bceb315575ce740c960300750b1ce5a9e4037e not found: ID does not exist" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.281481 4771 scope.go:117] "RemoveContainer" containerID="238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce" Jan 29 10:19:57 crc kubenswrapper[4771]: E0129 10:19:57.281903 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce\": container with ID starting with 238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce not found: ID does not exist" containerID="238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce" Jan 29 10:19:57 crc kubenswrapper[4771]: I0129 10:19:57.281952 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce"} err="failed to get container status \"238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce\": rpc error: code = NotFound desc = could not find container \"238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce\": container with ID starting with 238757eaad5fb6c811eb80d50a3d9044579f186b29d014a204fc727ba0f788ce not found: ID does not exist" Jan 29 10:19:58 crc kubenswrapper[4771]: I0129 10:19:58.851152 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8718b930-7393-4bd0-8d4f-028684732b5f" path="/var/lib/kubelet/pods/8718b930-7393-4bd0-8d4f-028684732b5f/volumes" Jan 29 10:20:02 crc kubenswrapper[4771]: I0129 10:20:02.838287 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921" Jan 29 10:20:02 crc kubenswrapper[4771]: E0129 10:20:02.839156 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 29 10:20:15 crc kubenswrapper[4771]: I0129 10:20:15.838540 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:20:15 crc kubenswrapper[4771]: E0129 10:20:15.839429 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:20:27 crc kubenswrapper[4771]: I0129 10:20:27.837670 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:20:27 crc kubenswrapper[4771]: E0129 10:20:27.838585 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:20:41 crc kubenswrapper[4771]: I0129 10:20:41.839305 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:20:41 crc kubenswrapper[4771]: E0129 10:20:41.840770 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:20:47 crc kubenswrapper[4771]: I0129 10:20:47.397322 4771 scope.go:117] "RemoveContainer" containerID="b555bd84aad10e833b265478d1223f763bc05c043f13c5562b9656e47dc343fe"
Jan 29 10:20:54 crc kubenswrapper[4771]: I0129 10:20:54.839466 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:20:54 crc kubenswrapper[4771]: E0129 10:20:54.840403 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:21:05 crc kubenswrapper[4771]: I0129 10:21:05.838493 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:21:05 crc kubenswrapper[4771]: E0129 10:21:05.839310 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:21:18 crc kubenswrapper[4771]: I0129 10:21:18.839122 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:21:18 crc kubenswrapper[4771]: E0129 10:21:18.840381 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:21:30 crc kubenswrapper[4771]: I0129 10:21:30.845536 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:21:30 crc kubenswrapper[4771]: E0129 10:21:30.846497 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:21:45 crc kubenswrapper[4771]: I0129 10:21:45.838413 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:21:45 crc kubenswrapper[4771]: E0129 10:21:45.839370 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:21:58 crc kubenswrapper[4771]: I0129 10:21:58.837805 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:21:58 crc kubenswrapper[4771]: E0129 10:21:58.838760 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:22:12 crc kubenswrapper[4771]: I0129 10:22:12.838633 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:22:12 crc kubenswrapper[4771]: E0129 10:22:12.840037 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:22:23 crc kubenswrapper[4771]: I0129 10:22:23.839223 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:22:23 crc kubenswrapper[4771]: E0129 10:22:23.840586 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:22:36 crc kubenswrapper[4771]: I0129 10:22:36.838871 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:22:36 crc kubenswrapper[4771]: E0129 10:22:36.839792 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:22:49 crc kubenswrapper[4771]: I0129 10:22:49.838500 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:22:49 crc kubenswrapper[4771]: E0129 10:22:49.839278 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:23:01 crc kubenswrapper[4771]: I0129 10:23:01.838106 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:23:01 crc kubenswrapper[4771]: E0129 10:23:01.838788 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:23:15 crc kubenswrapper[4771]: I0129 10:23:15.838497 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:23:15 crc kubenswrapper[4771]: E0129 10:23:15.839261 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:23:30 crc kubenswrapper[4771]: I0129 10:23:30.843221 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:23:30 crc kubenswrapper[4771]: E0129 10:23:30.843839 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:23:41 crc kubenswrapper[4771]: I0129 10:23:41.838870 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:23:41 crc kubenswrapper[4771]: E0129 10:23:41.840139 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:23:55 crc kubenswrapper[4771]: I0129 10:23:55.838032 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:23:55 crc kubenswrapper[4771]: E0129 10:23:55.838869 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"
Jan 29 10:24:10 crc kubenswrapper[4771]: I0129 10:24:10.850882 4771 scope.go:117] "RemoveContainer" containerID="7f6422bec125bdaefb77eaa8adfdec63cb3f1891826d2f1928666caa40641921"
Jan 29 10:24:10 crc kubenswrapper[4771]: E0129 10:24:10.851653 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-79kz5_openshift-machine-config-operator(12eedc7e-dceb-4fc2-b26a-5a4a87846b1a)\"" pod="openshift-machine-config-operator/machine-config-daemon-79kz5" podUID="12eedc7e-dceb-4fc2-b26a-5a4a87846b1a"